AI in Education Podcast
Dan Bowen and Ray Fleming are experienced education renegades who have worked in a wide range of educational institutions and education companies around the world. They talk about Artificial Intelligence in Education - what it is, how it works, and the different ways it is being used. It's not too serious or too technical, and is intended to be a good conversation. Please note the views on the podcast are our own or those of our guests, and not of our respective employers (unless we say otherwise at the time!)
Meet the weird new jobs AI just invented
11/13/2025
In this episode of the AI in Education Podcast, Ray and Dan wrap up Series 14 with a packed news and research roundup. They start with the tricky world of AI governance in education, where Ray explains how schools and universities can simplify their policies instead of writing 26 new ones. The conversation then turns to a Washington Post piece on the rise of new AI-driven jobs - from conversation designers to human-AI collaboration leads - and what this means for the future of work and capability-building. They also unpack new insights from CENet about how teachers are creating and using AI agents, explore Microsoft’s AI Diffusion report, and look at La Trobe University’s staff chatbot, “Troby.” They discuss Google’s education research, Claude’s pilot in Icelandic schools, and the latest update from OpenAI, before closing with a fascinating study on how students respond differently to teacher versus AI feedback. Listen in for practical insights, fresh data, and a few laughs along the way. News As AI reshapes the job market, here are 16 roles it has created - Washington Post CENet analyses teacher-created AI agents Microsoft AI Diffusion research Mustafa Suleyman - Human super intelligence Microsoft will offer in-country data processing in Australia & UK for Microsoft 365 Copilot Case Study "La Trobe University supercharges academic productivity with AI and Copilot Studio" New Google paper on AI and the future of learning Iceland goes Anthropic OpenAI - ChatGPT's new personalities Competitions for students to get involved in: CSIRO want you to predict pasture biomass from images - global https://www.kaggle.com/competitions/csiro-biomass United States Artificial Intelligence Institute Hackathon - US only How confidential is your chat with AI? Research Teacher, peer, or AI? Comparing effects of feedback sources in higher education
/episode/index/show/aiineducationpodcast/id/39017660
Education on Country: Peta-Anne Toohey on AI and Data Sovereignty
11/06/2025
In this episode of the AI in Education Podcast, hosts Dan and Ray welcome Peta-Anne Toohey, Social Reciprocity Manager at Indigital, Australia’s first Indigenous-owned digital training company. Together they explore how generative AI intersects with Indigenous knowledge systems, and why cultural safety, data sovereignty, and community-led design must be central to any tech or education initiative. Peta shares powerful stories from her work in Cape York, where communities are building digital skills on Country through augmented reality, drones, and caring-for-country technologies. She unpacks what it means to create culturally safe technologies, how free, prior and informed consent should shape AI use, and why decolonising how we think about technology is essential for equity in education. It’s a fascinating discussion on how AI can empower, or endanger, Indigenous communities, and what educators and universities can learn from truly collaborative design. Find Peta on LinkedIn: Links - Organisations, people and projects mentioned InDigital - Local Contexts - Terri Janke - Google's Indigenous Language Projects Google and language researchers team up to teach AI Aboriginal English Woolaroo: a new tool for exploring indigenous languages Microsoft's Indigenous AI Projects Modis delivers first-of-its-kind Aboriginal language app to help break down communication barriers AI technology helps protect sea turtle nests from feral pigs in north Queensland AI transforms Kakadu management
/episode/index/show/aiineducationpodcast/id/38935425
Inside the AI Classroom: Dan & Ray’s Big AI-in-Education Download
10/30/2025
Inside the AI Classroom: Dan & Ray’s Big AI-in-Education Download In this fast-paced news roundup, Dan and Ray dive head-first into the latest research and developments shaping AI in education. From MIT’s Perspectives for the Perplexed guide for schools, to McKinsey’s take on “agentic AI,” to Google’s LearnLM experiments with AI-powered textbooks, the duo unpack what every educator needs to know right now. They explore what’s happening inside classrooms, universities, and edtech labs — including new findings on AI literacy, evolving assessment design, and why “policing AI use” misses the point. Plus, they debate the rise of AI-integrated browsers like ChatGPT Atlas, what it means for assessment integrity, and how tools like Microsoft Copilot are reshaping both teaching and admin work. It’s the ultimate AI-in-education briefing — thoughtful, fast, and full of insights (and laughs) from two of the field’s most passionate voices. Here's all the links to news and research mentioned in the podcast (and, most importantly) the Two Ronnies Fork Handles sketch! Fork Handles News MIT "Guide to AI in Schools: Perspectives for the Perplexed" One year of agentic AI: Six lessons from the people doing the work OpenAI Atlas (and Perplexity Comet) An Opinionated Guide to Using AI Right Now Google "Learn Your Way" pilot Towards an AI-Augmented Textbook Experimentally Testing AI-Powered Content Transformations on Student Learning PEW Research into AI attitudes around the world Copilot in Windows Copilot consumer updates M365 Copilot Education updates BBC: The lecturers learning to spot AI misconduct UNE are rolling out their Madgwick AI system to all students Research The Bubble and Burner Model of AI-Infusion: A Framework for Teaching and Learning Firm or Fickle? Evaluating Large Language Models Consistency in Sequential Interactions GASLIGHTBENCH: Quantifying LLM Susceptibility to Social Prompting, What does ‘good teaching’ mean in the AI age? How university students work on assessment tasks with generative artificial intelligence: matters of judgement AI Knows Best? The Paradox Of Expertise, Ai-Reliance, And Performance In Educational Tutoring Decision-Making Tasks
/episode/index/show/aiineducationpodcast/id/38840730
Becoming More Human in the Age of AI at University
10/23/2025
Becoming More Human in the Age of AI at University What happens when AI knows everything - and humans must rediscover what makes us unique? In this episode, host Ray Fleming sits down with Carlo to explore how artificial intelligence is reshaping not just libraries, but the very identity of education itself. Carlo shares how librarians are helping students and academics navigate AI’s rapid rise - guiding them to think critically, question deeply, and find their authentic voice in an age of infinite information. Together, they unpack how AI is pushing universities to move beyond expertise and towards empathy, collaboration, and humility, and why becoming “more human” may be the most important skill of all. Read Carlo's writing on Hybrid Horizons: Exploring Human–AI Collaboration Follow Carlo on Twitter You may also find the Charles Sturt University Library website on Generative AI at University really useful
/episode/index/show/aiineducationpodcast/id/38723150
The AI Mayhem Episode
10/16/2025
In this week’s episode of the AI in Education Podcast, Ray and Dan dive into one of the most chaotic – and entertaining – weeks in AI news so far. From councils losing millions to AI-powered scams to the idea of having a “family safe word,” this one swings between hilarious and hair-raising. They unpack what’s new in AI assessment research - including TEQSA’s AI guidance for universities, the “wicked problem” of AI and assessment, and why Turnitin’s detection tools are under fire (again). You’ll hear how South Australia’s EdChat report shows teachers and students deepening their learning with AI, and which countries are quietly leading the world in classroom AI use (spoiler: it’s not who you think). Plus, a few surprise stats on politeness and prompt-writing - turns out being rude to AI might actually get better results. We've just arrived on YouTube and TikTok! YT Channel - the podcast in video form, and Shorts TikTok Links to news items discussed AI Safety Mike Tholfsen's Microsoft 365 Copilot Tutorial South Australia's Edchat Insights Report Enacting assessment reform in a time of artificial intelligence And the link to 'Assessment reform for the age of artificial intelligence' - Syracuse University gives Claude Education to all students and staff Jordan - the whole country, one man chat app for education California Community Colleges also rolling out Nectir to staff and students 2025 "State of AI" report Oxford University Press report on AI use by UK school students OECD’s latest Teaching and Learning International Survey University wrongly accuses students of using artificial intelligence to cheat ACU's checklist for spotting AI written text: And in researching this, I also stumbled across the Wikipedia page "Signs of AI Writing" And that includes things like "LLMs overuse the rule of three - 'the good, the bad and the ugly'"; the use of Title Case; and our old friends em dashes and emojis. And if you really want to go down the rabbit hole, read the 'Talk' tab on that page, where people are discussing their own opinions/beliefs on this. Research The wicked problem of AI and assessment Reimagining the Artificial Intelligence Assessment Scale: A refined framework for educational assessment Assessment Twins: A Protocol for AI-Vulnerable Summative Assessment Heads we win, tails you lose: AI detectors in education. What Does YouTube Advise Students About Bypassing AI Text Detection Tools? A Pragmatic Analysis Mind Your Tone: Investigating How Prompt Politeness Affects LLM Accuracy
/episode/index/show/aiineducationpodcast/id/38602590
Humans in AI – Creativity, Wellbeing & Technology in Education
10/09/2025
Humans in AI – Creativity, Wellbeing & Technology in Education – a researcher's perspective Guest: Dr Rebecca Marrone, Lecturer & Researcher, University of South Australia In this episode, Dan and Ray welcome Dr Rebecca Marrone to discuss the intersection of AI, creativity, and wellbeing in education. Her research explores how technology, especially AI, is transforming the educational landscape for both teachers and students. Key Topics AI’s Impact on Teacher & Student Wellbeing Creativity & Critical Thinking in the Age of AI Ethical Risks of AI in Education Strategies for digital wellbeing and critical engagement. The evolving role of soft skills alongside technological advancements. Connect & Learn More For more about Dr Rebecca Marrone’s research, listeners can reach out directly or explore her work online. Find Rebecca on and read her research publications via We mentioned Rebecca and Vitomir's critique of the Brainrot study. In the podcast, Rebecca highlighted George Siemens' research and newsletters: Newsletter archive: Google Scholar profile: Research mentioned during this episode Marrone, R., Taddeo, V., & Hill, G. (2022). Creativity and artificial intelligence—A student perspective. Journal of Intelligence, 10(3), 65. Marrone, R., Zamecnik, A., Joksimovic, S., Johnson, J., & De Laat, M. (2025). Understanding student perceptions of artificial intelligence as a teammate. Technology, Knowledge and Learning, 30(3), 1847-1869. Marrone, R., Cropley, D. H., & Wang, Z. (2022). Automatic Assessment of Mathematical Creativity using Natural Language Processing. Creativity Research Journal, 35(4), 661–676. Cropley, D. H., Theurer, C., Mathijssen, A. C. S., & Marrone, R. L. (2024). Fit-For-Purpose Creativity Assessment: Automatic Scoring of the Test of Creative Thinking – Drawing Production (TCT-DP). Creativity Research Journal, 1–16. https://doi.org/10.1080/10400419.2024.2339667
/episode/index/show/aiineducationpodcast/id/38519535
Schools and universities fast-track AI rollouts: from Oxford to Australia
10/02/2025
This week, Dan and Ray bring a whirlwind of AI news, research, and reflection from across the education world. From South Australia and New South Wales announcing state-wide AI chatbot rollouts for schools, to Oxford University embracing ChatGPT Education for all staff and students, the scale of adoption is hard to ignore. The hosts explore what these bold moves mean for schools, universities, and the future of assessment. They highlight contrasts between Australia’s rapid school-level deployments and the slower university approach, and compare these with global examples such as Arizona State University’s 158,000-student rollout. The conversation doesn’t stop at institutions. Ray and Dan unpack new releases from Microsoft, OpenAI, and Google - including multi-model Copilot, parental controls in ChatGPT, and the startling realism of Sora 2 videos. They also reflect on recent surveys showing student demand for clearer AI guidance. Packed with insights, surprises, and a few laughs, this episode shows why AI in education is evolving faster than ever. Links and References News SA High Schools get EdChat for all students NSW EduChat for all staff and students in Years 5 to 12 in NSW government schools ASU signs up for ChatGPT Edu for every staff member, researcher and student UNSW signs Australia’s biggest education deal with OpenAI to roll out ChatGPT to staff Oxford University in the UK just announced they’d do it for all staff and students Google signs up 2 million students and staff via California Community Colleges Microsoft M365 Copilot Chat now free in Office apps Copilot now allows Claude/Anthropic models for Researcher & Agents OpenAI launch parental controls for ChatGPT OpenAI release Sora2 OpenAI on X: OpenAI Prompt Packs Prompt Guides for education: K12 IT Managers Faculty School administrators Google Homework Help story Google is indexing ChatGPT conversations, potentially exposing sensitive user data IPSOS Education Monitor You gov survey on UK student use of AI Exam Hack AI Arden University students get AI detector shut down Research Prompt Injection Attacks on LLM Generated Reviews of Scientific Publications The Transparency Dilemma: How AI disclosure erodes trust
/episode/index/show/aiineducationpodcast/id/38436640
From Learning to Earning: Education in an AI Age with David Yip
09/25/2025
In this episode of the podcast, Dan and Ray sit down with David Yip - former Salesforce Director for Education in APAC, UNSW Business School advisory board member, and host of the Relearning Work podcast. Together, they explore how education must adapt in a world shaped by AI, where learning and earning can no longer be separate. David shares insights from his work in big tech, his leadership with the Future Skills Organisation, and his new platform, designed to bridge the gap between study and employment. This conversation unpacks themes of scale, the need to shift from finding answers to asking questions, and why the future of learning must be deeply human-centred - even in a tech-driven world. Resources We talked about some of the stories we'd heard on David's own podcast, Relearning Work. During the interview the episodes we mentioned specifically were: - noted for scaling the institution massively and innovating education delivery and - Discussed shifting from finding answers to asking questions - Talked about the and micro-credentials. "You’ve got to eat your vegetables." - discussed systemic reform, earn-while-you-learn models, and unbundling education We also discussed the Future Skills Organisation Report
/episode/index/show/aiineducationpodcast/id/38321630
AI Landgrabs, Lawsuits & Learning: AI in Education
09/18/2025
In this news and research episode, Ray and Dan unpack a whirlwind of global developments in AI and education. From major US announcements, like Microsoft, Google, and Amazon offering free AI tools and training for students, to Australia's push for sovereign AI infrastructure, it’s clear the AI education landscape is shifting fast. They explore the massive copyright settlement involving Anthropic and the controversial Books3 dataset, dig into what AI is actually trained on, and consider the implications of training data transparency. They also spotlight Georgia Tech’s Jill Watson project and a new study comparing different RAG (Retrieval-Augmented Generation) strategies (essential reading for anyone building AI tutors or educational bots?) Plus: Google’s AI Quests, Australia’s new social media ban for under-16s, OpenAI’s new certifications, and a growing global interest in culturally specific AI models. News - Links White House AI Education Task Force https://www.theverge.com/policy/772084/amazon-google-microsoft-white-house-ai-education Key tech company commitments announced: Microsoft: Google: Amazon: Digital Education Council's "AI in the Workplace 2025" report Microsoft becomes best buddies with Anthropic Anthropic settled copyright lawsuit OpenAI announces new certification Greece give ChatGPT Edu to high school students OpenAI's Global Faculty AI Project Some of the ideas that are there include: Brinnae Bent, PhD (Duke University) on “Hack Your Grade,” an experiential assignment where students try to outsmart a chatbot, building AI literacy and critical thinking through hands-on practice. David J. Malan (Harvard University) on creating a “virtual rubber duck” (Dan, do you know what 'rubber ducking' is?) debugging system that guides computer science students instead of giving them the answers. andre j. hermann (Houston City College) on using AI in the photo studio to design real-world assignments that teach both technical craft and career skills like creative briefs, brainstorming, and execution. Marcos Rojas Pino, MD (Stanford University) on Clinical Mind AI, which grew from a custom GPT into a multilingual simulation platform to run realistic patient encounters, making high-quality clinical reasoning training accessible to all health professions students. Google launches AI Quests to teach AI literacy to students aged 11 to 14 Australian eSafety changes on social media (and potentially AI) Sovereign Australia AI launch Research - Links Georgia Tech’s Jill Watson Outperforms ChatGPT in Real Classrooms Aligning LLMs for the Classroom with Knowledge-Based Retrieval -- A Comparative RAG Study
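For listeners curious what a RAG pipeline actually involves, here is a minimal sketch of the retrieve-then-generate loop that studies like the comparative RAG paper above vary and measure. It is illustrative only - the sample documents, the keyword-overlap scorer and the generate() stub are placeholder assumptions, not code from either study.

# Minimal retrieval-augmented generation (RAG) sketch - illustrative only.
from collections import Counter

documents = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "Newton's second law states that force equals mass times acceleration.",
    "The mitochondria is the powerhouse of the cell.",
]

def score(query, doc):
    # Crude keyword-overlap score; real systems use embeddings or BM25.
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query, k=2):
    # Return the k documents most relevant to the query.
    return sorted(documents, key=lambda d: score(query, d), reverse=True)[:k]

def generate(prompt):
    # Placeholder for a real LLM call; echoes the prompt for demonstration.
    return "[model would answer using]\n" + prompt

def rag_answer(question):
    context = "\n".join(retrieve(question))
    return generate("Answer using only this context:\n" + context + "\n\nQuestion: " + question)

print(rag_answer("What does Newton's second law say?"))

Swapping the keyword scorer for dense embeddings, changing how much retrieved context is stuffed into the prompt, or grounding the model so it refuses to answer outside that context are exactly the kinds of design choices such comparative studies evaluate.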
/episode/index/show/aiineducationpodcast/id/38260835
Simon Breakspear on AI - Slow Down: AI, Learning, and the Cognitive Escalator
09/11/2025
In this week's podcast, Dan and Ray have a conversation about education with Dr Simon Breakspear. Simon is a globally respected expert in educational leadership and innovation, known for his insightful work on transforming learning environments. His forward-thinking ideas around education are reshaping the way schools and school systems are thinking about teaching, learning and leadership. In this episode we talk about: Distinction Between Productivity and Pedagogical Uses of AI: Simon, Dan, and Ray discussed the critical distinction between using AI for productivity tasks in education (such as administrative work and report summarisation) and for pedagogical purposes, emphasising that while AI can greatly enhance productivity for adults, its use in learning processes for students requires careful consideration to avoid undermining cognitive development. Human Development and the Role of Analogue Learning: Simon argued that foundational human development—such as reading, writing, and critical thinking—should precede the use of AI in learning, with Dan and Ray supporting the view that analogue learning experiences are crucial for building the cognitive and personal skills necessary for effective future use of AI. Guidance for School Leaders and Teachers on Navigating AI Integration: Ray and Dan sought practical advice from Simon for school leaders and teachers facing pressure to adopt AI, with Simon recommending a cautious, evidence-based approach that prioritises human development, leverages AI for productivity gains, and introduces AI into learning processes only where it demonstrably enhances educational outcomes. Changing Role of Teachers in the Age of AI: Ray questioned whether the role of teachers must change with AI, and Simon responded that while some administrative tasks may be automated, the core human functions of teaching—motivating, engaging, and forming students—will become even more critical, with teachers needing to exercise professional judgement about when and how to use AI in the classroom. Ethical and Equity Considerations in AI Adoption: Dan and Simon discussed the ethical implications and potential inequities arising from AI adoption in education, highlighting concerns that uneven access and premature augmentation could disadvantage certain groups of students and create disparities in skills and opportunities. Practical Strategies for Selective AI Integration: Simon provided practical strategies for integrating AI into education, recommending that schools focus on specific, evidence-based learning processes where AI can add value, such as feedback and retrieval practice, and avoid being overwhelmed by the proliferation of AI tools. Long-Term Purpose of Education Amidst Technological Change: Simon concluded that the ultimate goal of education is not solely economic productivity but the holistic formation of human beings, arguing that enduring human skills, identity, and community are essential for resilience in an unpredictable future, regardless of technological advancements. Links: The pruning principle: Research and Historical References Mentioned • Learning Science & Cognitive Development: Simon referenced the work of Kirschner and Sweller on learning as a change in long-term memory, including declarative and procedural knowledge.
• Lindy Effect (Nassim Taleb): Simon discusses the Lindy effect, suggesting that things valuable for a long time (like bicycles or spoons) are likely to remain valuable, as a way to think about educational priorities amid rapid technological change.
• Daisy Christodoulou: Simon cites Daisy Christodoulou’s perspective that while AI may be a better writer, it cannot know what you actually think, emphasizing the importance of writing as a way to learn how to think.
• Historical Technology Adoption in Education: Simon refers to the rollout of one-to-one devices and mobile phones in schools, highlighting unintended consequences for attention and learning, and drawing lessons for AI adoption.
• Mathematics Education Practice: Simon references the established practice of delaying calculator use in mathematics until foundational skills are developed, as an analogy for AI use in learning.
• DeepMind CEO on Coding: Simon mentions a recent interview with the CEO of DeepMind, who argues that understanding how things work is necessary, even if AI can code better than humans.
/episode/index/show/aiineducationpodcast/id/38111625
7 Surprising Truths About How Students Really Use AI
09/09/2025
Introduction: Beyond the Hype and Panic For the past couple of years, the conversation around students and artificial intelligence has been dominated by a palpable sense of anxiety. We’ve all heard the headlines and the hallway chatter - fears of widespread, undetectable cheating, the slow erosion of critical thinking, and the looming threat of a "cognitive debt" where students outsource their learning and forget how to think for themselves. But in Series 12 of the podcast, we spent time listening to a wide range of students, from curious middle schoolers to ambitious university attendees. We felt our job was to tune out the noise and listen to the signal. And the signal is telling a very different story. The reality of how this generation is engaging with AI is far more nuanced, sophisticated, and frankly, more hopeful than the panic suggests. It's a story of active, often brilliant, adaptation. Here are the seven most surprising truths we learned, directly from them. 1. The Great "Cheating Panic" Is Largely a Misunderstanding While it’s true that some students use AI to cheat, the fear of a generation of plagiarists is largely overblown and misinterprets how students are actually engaging with these tools. Groundbreaking research from Dr. Anna Denejkina reveals a reality that challenges the common narrative: the vast majority of students, around 80%, state they have not plagiarised using AI and have no intention of doing so. More importantly, Dr. Denejkina uncovered a crucial, counter-intuitive insight. Many students who think they might be plagiarising are, in fact, using AI for perfectly legitimate learning activities. They're workshopping ideas, brainstorming essay structures, and checking their grammar - processes we would celebrate if they were done with a peer or a tutor. This is compounded by a very real fear of being falsely accused of cheating, a significant concern noted by Jake Turnbull from Pymble Ladies' College. The core issue isn't a sudden decline in academic integrity, but a generation left confused and anxious by a lack of clear institutional guidelines. As Dr. Denejkina asked, “It sounds like any use of generative AI for schooling for learning to them is plagiarism? So where have we gone wrong that young people are thinking they're plagiarising when they're actually not?” 2. They Want an AI Tutor, Not a Cheat Sheet Overwhelmingly, students are turning to AI not to bypass their work, but to find a space for immediate, non-judgmental help to better understand it. They are seeking a patient, on-demand tutor that can fill in the gaps left by traditional classroom instruction. Data from the Chegg survey was striking: when students struggle academically, 29% turn to generative AI first for support. In contrast, only 8% turn to their professors first. It’s not a rejection of their teachers, but a search for a safe space to be vulnerable. This explains why, in a Harvard Business School case study, a custom AI tutor was primarily used for "concept breakdown" and to ask questions students were "too embarrassed to raise in front of 90 of my peers." This desire for genuine learning was made explicit at Thomas Blackwood’s school, where students involved in developing a custom AI told the developer in no uncertain terms: "We want the AI to teach us and not give us the answer." It’s a sentiment perfectly embodied by 12-year-old Megan, who uses AI for her math homework.
She doesn't just ask for the solution; she specifically prompts it to "explain how you did this" and, if necessary, simplify the explanation to a "year one" level. With AI, there is no judgment, only help. 3. They're Using AI to Become More Creative, Not Less One of the most persistent fears is that AI will atrophy creativity, replacing human imagination with machine-generated mediocrity. The story of Caitlin, a Year 11 student, completely flips that script. For an English creative writing assignment that required a video component, Caitlin used an AI video generator. Where previous cohorts had relied on the same handful of clips "they found off YouTube or Clickview" that were "just boring," she was able to produce unique, high-quality visuals that brought her story to life. Crucially, she wrote the entire story herself. The AI wasn't a replacement for her creativity; it was a tool to enhance and visualise it. The process demanded more from her, not less. To get the AI to produce the exact scenes she imagined, she had to engage in an iterative problem-solving process, refining her own descriptive writing when the AI failed. As she explained, "I've had to redescribe it and tell AI... 'I don't want this. That was a bad idea.'" In this case, the AI wasn't a crutch; it was a creative collaborator that demanded a higher level of skill from its human partner. 4. They're Becoming Masters of a New Skill: AI Orchestration Students like Caitlin aren't just using one AI in isolation; they are developing sophisticated workflows that chain multiple tools together to achieve a complex goal. This is a new, self-taught form of digital literacy: AI orchestration. Caitlin didn't just type a simple command into the video generator. First, she went to ChatGPT to help her craft a detailed, descriptive text prompt. She then fed that highly refined prompt into a separate AI video generator to get the best possible output. She was using one AI to prompt another. This is a complex, problem-solving skill that students are developing organically, far ahead of any formal curriculum. They are learning how to manage a team of specialised AI assistants, assigning the right task to the right tool to achieve their vision. 5. AI Is Their 24/7 Coach for Academics, Life, and Well-Being For this generation, AI is becoming a ubiquitous assistant that extends far beyond the classroom. It's an academic coach, a life coach, and a well-being support tool, available 24/7. The academic coaching is clear, perfectly captured in Brett Moller’s story of his daughter, who, after getting a math exam back, took a photo of a question she got wrong and prompted her AI, "Please help me, why did I get this wrong?" But its role as a "life coach" is just as significant. As a report in The Guardian noted, students are using AI for everything from writing internship applications to getting dating advice. The support even extends to mental well-being. Twelve-year-old Megan shared how she turns to AI when she gets stressed with her dancing: "I ask it, can you help me calm myself down? And it's like, sure, take a few deep breaths and stuff like that." For many students, AI is becoming a trusted and versatile first point of contact for a wide range of personal and professional challenges. 6. Students Are Actually Teaching the Teachers In a fascinating power inversion, students are often the most knowledgeable AI experts in the classroom, leading to moments where they are teaching their own teachers.
After Caitlin demonstrated the stunning videos she had created for her English project, her teacher was so impressed that she wanted to learn how. Caitlin ended up holding an impromptu "master class" for her entire English class, writing the steps on the board for everyone to follow. This isn't an isolated incident. At Jake Turnbull's school, students ran an entire professional development day for 500 staff members, demonstrating how they use AI in their learning. This role-reversal highlights the incredible pace of technological adoption and underscores the importance of valuing and integrating student expertise into our educational models. 7. Clear Rules and Safe Tools Foster Trust and Responsibility When institutions provide clear guidance and safe, sanctioned tools, students respond with greater trust and more responsible behavior. The confusion that plagues many students is replaced by confidence. Caitlin's school, for example, uses a "stoplight system" (red for no AI, yellow for polishing and ideas, green for encouraged use). This simple framework removes ambiguity, helps students feel trusted, and empowers them to use AI without fear of accidental wrongdoing. At Brett Moller's school, student prompts are monitored - not to punish, but to identify opportunities to teach them how to become better, more effective prompters. And at All Hallows' School, students noted that they trust the school-provided Gemini account far more than other public tools. The core paradox is clear: providing structure and guardrails doesn’t stifle student use of AI, but rather unleashes it by giving them the confidence to experiment responsibly. Conclusion: Learning from the Learners If we take the time to listen, it becomes clear that students are using AI in ways that are far more constructive, sophisticated, and hopeful than we often give them credit for. They are not passive consumers waiting for an easy answer. They are strategists like Caitlin, orchestrating multiple AIs to bring a creative vision to life, and lifelong learners like Brett Moller's daughter, turning a moment of failure on an exam into an opportunity for understanding. They are active, critical, and creative users who are navigating a complex new landscape with ingenuity. Instead of asking how we can stop students from using AI, perhaps the better question is: what can we learn from them about how to use it well?
/episode/index/show/aiineducationpodcast/id/38137880
AI Study Buddies, Compliance Cheaters, and the Rise of Corella
09/04/2025
In this episode of the AI in Education Podcast, Ray and Dan dive into the latest news, tools, and research transforming education through AI. From ChatGPT’s agent and study modes to Google’s new Nano Banana image tool and Grammarly’s army of AI agents, there’s no shortage of innovation—or controversy. They unpack OpenAI’s ambitious Learning Accelerator in India and explore how Australian schools are rolling out Corella, an AI assistant aimed at reducing teacher workload. The duo also discuss new research including the Microsoft 2025 AI in Education report, the Tech Council of Australia’s workforce study, and that eye-catching MIT headline: “95% of AI projects fail.” Plus: Why students might start challenging teachers with Grammarly's grading predictions, how image tools are making reality harder to spot, and what a new AI supercomputer in Melbourne means for the future. The OpenAI Learning Accelerator in India ChatGPT Study Mode Google's nano banana Ronnie Chieng's video about Boomers falling for AI If you love this, you'll also love his whole special on Netflix, called "Ronny Chieng: Love to Hate It" Grammarly AI Agents (and specifically AI Grader) Microsoft’s 2025 AI in Education report Queensland to Roll Out AI Tool “Corella” in Schools Australian AI workforce study from TCA Dan mentioned some research about junior jobs being more impacted by AI than senior jobs. Here's a thread to read on that research, and the link to the original paper: And if you want to go deep into the topic, then there's some excellent analysis written by Noah Smith here: Monash/Nvidia new AI supercomputer https://www.theage.com.au/technology/nvidia-supercomputer-marks-new-era-for-australian-ai-20250813-p5mmjo.html Power-hungry data centres scrambling to find enough electricity to meet demand https://www.abc.net.au/news/2024-07-26/data-centre-electricity-grid-demand/104140808 Research The GenAI Divide: State of AI in Business 2025 Future of Work with AI Agents: Auditing Automation and Augmentation Potential across the U.S. Workforce ChatGPT in Education: An Effect in Search of a Cause
/episode/index/show/aiineducationpodcast/id/38081095
Be Human First: Leading with Purpose in an AI World - Dr Sophie Fenton
08/28/2025
What does it really mean to lead in the age of AI? In this deeply insightful episode, Dr Sophie Fenton joins us to explore the human side of AI adoption in education and the workplace. From her unique background spanning classroom teaching, school leadership, university strategy, and corporate consulting, Sophie offers a compelling case for starting with the human, not the tool. We talk about what it means to create intentionally human contexts for AI, how leadership styles need to adapt, and why soft skills are quickly becoming our most powerful asset. Sophie unpacks the concept of "human experience" departments, AI ethics, and why education must return to its philosophical roots if we’re to thrive in a digital future. Links • Microsoft Work Trend Index report: • Sophie referenced the “10 of the 15 fastest growing skills are soft skills” infographic that she shared on LinkedIn:
/episode/index/show/aiineducationpodcast/id/37998815
AI Tutors, Bias & GPT-5: What Just Happened?
08/21/2025
In this episode of the AI in Education Podcast, Dan and Ray dive into the latest developments shaping the future of AI in learning environments - from vocational colleges to elite universities. All the links to items and research discussed are below! News Australia's Future Skills Organisation and Microsoft launched the FSO Skills Accelerator-AI partnership Microsoft Elevate Google commits US$1bn for AI training at US universities CAUDIT Top Ten 2025 South Korea pulls plug on AI textbooks Consumer news reporting on AI in Education ABC Channel 9 Productivity Commission report that highlights the use of AI in education, including to reduce teacher workload New DFE AI guidance for schools Ofsted's findings on AI in Education Research AI tutoring outperforms in-class active learning: an RCT introducing a novel research-based design in an authentic educational setting AI tools used by English councils downplay women’s health issues Original paper: News reporting:
/episode/index/show/aiineducationpodcast/id/37887195
Gen Z to Gen AI: What Students Really Think – with Dr Anna Denejkina
08/14/2025
In this episode of the AI in Education Podcast, hosts Dan and Ray welcome back one of their favourite researchers, Dr Anna Denejkina, to unpack her latest study, "From Gen Z to Gen AI". This research collaboration, on the use of and attitudes to AI among students, explores how young Australians are adopting generative AI, and what they really think about it. From career pivots and skill confidence gaps to plagiarism misconceptions, Dr Denejkina shares powerful insights into the realities behind the headlines. Discover why 30% of Gen Z students are rethinking their career plans, how gender influences AI confidence, and why students are asking for clear, practical guidance on what is, and isn’t, acceptable AI use. The conversation also covers deepfakes, creative industry disruption, and how students themselves are calling for more inclusive and representative decision-making in AI development. If you've ever wondered "How do students use AI?", then this is the episode (and podcast series!) for you.
/episode/index/show/aiineducationpodcast/id/37754535
AI Study Mode, Super Tools & Future Jobs
08/07/2025
This episode covers a massive amount of AI-related news and research, especially developments over the past week, including ChatGPT's new "study mode," major platform announcements from Google, Microsoft, and OpenAI, generative video and music tools, and the implications of AI on jobs and education. The episode also highlights a new Stanford GenAI for Education hub and discusses current AI policy and access initiatives globally. News ChatGPT Study Mode: Simon Willison info on the system prompt for Study Mode: Dr Philippa Hardman's LinkedIn post on Study Mode: Google announced “Guided Learning” mode for Gemini Open models by OpenAI Google released Gemini Storybooks ElevenLabs dropped a new multi-lingual music generation model OpenAI giving ChatGPT Enterprise to every US Federal Government department for $1 a year Microsoft Copilot released for 13+ students 18 months. 12,000 questions. A whole lot of anxiety. What I learned from reading students’ ChatGPT logs The Presidential AI Challenge Research GenAI for Education Hub at Stanford University Working with AI: Measuring the Occupational Implications of Generative AI Prompting Science Report 3: I'll pay you or I'll kill you - but will you care?
/episode/index/show/aiineducationpodcast/id/37715945
Not Cheating - Learning: Students on Using AI Ethically
07/31/2025
In this eye-opening episode of the podcast, hosts Ray and Dan are joined by a remarkable group of students from . Part of the student-led series 12, this conversation dives into how teens are really using AI in their schoolwork. And it’s far more sophisticated than you might think. From using Gemini and ChatGPT to find historical sources and generate study guides, to marking their own assignments with AI before submission, these students aren’t cutting corners. They talk openly about how they’re finding smarter ways to learn. The discussion explores ethical concerns like plagiarism, cheating, and data privacy, as well as creative uses of AI in coding, English, and even video production.
/episode/index/show/aiineducationpodcast/id/37614420
From Harvard bots to Aussie classrooms: the state of AI in education
07/24/2025
From Harvard bots to Aussie classrooms: the state of AI in education In this solo news and research roundup, Ray covers some of the biggest recent stories in AI and education – from high-profile case studies to practical tools for teachers and real concerns from students. Links for what’s in this week's episode: New AI training modules for Australian teachers Education Services Australia has released a set of lesson-ready resources to help teachers guide students in safe, effective AI use – including interactive activities and a certificate aligned with AITSL standards. Harvard Business School’s chatbot tutor A deep dive into how Harvard embedded a custom AI tutor into its accounting course – used by 75% of students, with a surprising impact on learning and classroom discussion. Student appeals and the limits of AI detection tools The UK’s higher ed ombudsman has warned universities about relying too heavily on detection software, especially when it comes to vulnerable student groups. OIA article: Case summaries: TES Report: ASBA AI Preparedness Survey - June 2025 A recent survey of 50 independent schools shows most are still playing catch-up on AI governance, staff training, and clear policies. NSW Auditor report on universities for 2024 Canvas and ChatGPT join forces Instructure has announced native AI integration into Canvas LMS, including AI-powered assignments that track learning progress directly into the gradebook. Scientists reportedly hiding AI text prompts in academic papers to receive positive peer reviews Reports emerge of researchers embedding hidden prompts into papers to game AI peer review tools. What students really think New surveys from the UK and Australia show that students are keen to use AI, but feel underprepared and unsupported. They want more guidance – not just for study, but for their future careers. UK: Australia:
/episode/index/show/aiineducationpodcast/id/37530555
AI, Assessments & A-Graders: A Student's POV
07/17/2025
AI, Assessments & A-Graders: A Student's POV What happens when AI tools like ChatGPT and Google’s VEO3 land in the hands of curious, creative high schoolers? In this episode, we talk with Caitlyn, a Year 11 student from Sydney, who gives us a front-row seat to how students are really using AI in school right now. From refining English essays with high-modality language to producing eye-catching videos that bring creative writing to life, Caitlyn shows how AI has become her on-demand study buddy, grammar coach, and even a research assistant. She unpacks how her school’s “traffic light system” guides ethical AI use and reveals what happens when students take ownership of the tools and the learning. Caitlyn’s perspective offers a grounded, inspiring look at the future of learning with AI. Links , that created the change momentum. - only currently available to paid or subscribers. The (which Caitlin calls the Traffic Light System, although the AIAS team have moved away from that model a bit) for appropriate AI use in assignments. Ray mentioned
/episode/index/show/aiineducationpodcast/id/37435645
Union-Powered AI: PD Meets Big Tech
07/10/2025
Union-Powered AI: PD Meets Big Tech This solo news-and-research roundup unpacks a blockbuster fortnight for AI in education, highlighting a landmark partnership between U.S. teacher unions and tech companies that could rewrite professional development. Headlines AFT × UFT × Microsoft/OpenAI/Anthropic – launch the National Academy for AI Instruction (US $23 M) to train 400,000 educators as AI-savvy professionals Announcement: AI Instruction website: Google “Gemini for Education” – 30 generative tools for lesson design, rubrics & more (ISTE 2025 reveal). Blog post: 21 page announcement PDF: White House “EDAI” pledge – 67 “AI Education & Workforce Champions” commit resources for schools And a questioning post from Microsoft Elevate & AI Economy Institute – New Microsoft philanthropy arm plus research fellowships on AI, work & learning. Microsoft Announcement: Microsoft Elevate: AI for Good Lab: Research spotlights Teaching for Tomorrow (Walton Foundation × Gallup): 60% of U.S. teachers using AI save 5.9 hrs/week, yet only 19% of schools have policies. Simple Techniques to Bypass Gen-AI Detectors (Perkins et al., 2024): popular text detectors flag 50% of genuine student work as AI. Jisc briefing: even 1% false-positive rates could falsely accuse 4,800 papers/year at a 20 k-student university. Riding the Tiger of AI Feedback (Aus. multi-uni, 2025): 50% of students already use AI for feedback; teachers still ranked “more helpful.”
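A quick back-of-the-envelope calculation shows where a figure like the Jisc one comes from; the per-student submission count below is our own assumption chosen to reproduce the quoted number, not a figure from the briefing.

# Reproducing the scale of the Jisc false-positive warning.
students = 20_000
false_positive_rate = 0.01          # the "even 1%" scenario
submissions_per_student = 24        # assumed AI-checked submissions per student per year

total_submissions = students * submissions_per_student      # 480,000 papers/year
wrongly_flagged = total_submissions * false_positive_rate
print(wrongly_flagged)              # 4,800 papers falsely flagged per year

The point stands even if the assumed volume is halved: at institutional scale, a detector that is 99% accurate still produces thousands of false accusations a year.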
/episode/index/show/aiineducationpodcast/id/37357065
AI by Students, for Students
07/03/2025
Series 12, Episode 8: AI by Students, for Students In this special episode of the AI in Education Podcast, recorded live, Dan and Ray chat with two forward-thinking school leaders about how students are using AI in transformative ways. First up, Thomas Blackwood, Head of ICT, shares how his school built its own AI tool – "Annabelle" – to give students a tailored, ethical, and effective AI tutor. From exam revision to building study guides, students are using it creatively and proactively, showing just how valuable thoughtful AI implementation can be. Then, Jake Turnbull, Digital Learning Leader at Pymble Ladies' College, discusses empowering students through leadership and voice. From running AI professional learning for 500 teachers to organising student-led symposiums, Jake highlights how students are already shaping the future of AI in education. Links During the conversation we mentioned: Alpha School in Austin, Texas: Khan Academy's AI Tutor Khanmigo:
/episode/index/show/aiineducationpodcast/id/37268665
Does AI make you dumb?
06/26/2025
After starting with an existential crisis - "Are we basically doing the AI equivalent of a maths calculator podcast from the 1970s?" - in this news and research update, Dan and Ray unpack the latest developments in AI and education. Starting with China’s decision to shut down AI tools during national exams, they then revisit NSW’s EduChat chatbot, now in widespread use, with compelling data on time savings for teachers and learning benefits for students. The hosts dive into fresh research from the LEGO Foundation and Microsoft, both highlighting how young students engage with generative AI—and the equity and creativity issues that come with it. They also tackle the viral MIT study suggesting AI could cause "cognitive debt" and discuss why such claims should be taken with academic caution. Finally, Dan and Ray trace the recurring media fear that each new technology - from books to bicycles - has been accused of making us stupid. As always, they bring wit, warmth, and real insight into how AI is shaping education. Links and references for the studies, news and research discussed: News China shuts down AI tools during nationwide college exams [, ] Research [] [ - - ] And finally For your enjoyment, Donald Clark's "" aka What's making us dumb this time? And if you want more enjoyment like Donald's article, then you'll love or
/episode/index/show/aiineducationpodcast/id/37159380
We finally meet Megan and James
06/20/2025
We're halfway through Series 12 - the Student one - and in episode 6 you finally get to meet Megan (Year 7) and James (Year 11). Long-term listeners will have heard co-host Dan talking about how his own children use AI, and so now you can hear their perspectives. Dan asks both of them to share how they and their friends use AI in school and outside. Enjoy the honest revelations!
/episode/index/show/aiineducationpodcast/id/37085780
Gen Z's AI Reality
06/13/2025
In this news and research-packed episode, Ray and Dan dive deep into the AI highlights from EduTech 2025 in Sydney - reflecting on the vibe, standout presentations, and the surprisingly light AI presence on the expo floor. They unpack major news from the UK’s Department for Education, OpenAI’s model pricing shake-up, and raise serious red flags over Meta AI’s privacy approach. The duo also tackles the big questions educators face: is AI destroying the planet? Can we trust AI with student data? And what do students themselves think? Featuring insights from two key research pieces - Australia’s "From Gen Z to Gen AI" and Jisc’s UK-based "Student Perceptions of AI 2025" - this episode reveals how students are using AI, what they’re worried about, and why institutions need to catch up. Links: OpenAI releases o3-pro () and slashes the OpenAI , and details of Meta.ai privacy approach - so far, only covered by Energy/water usage of ChatGPT In he said "the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes." (about ) 's excellent blog post we discussed: "" which also contains the great comparison charts for electricity and water use for common activities. Details on Karen's upcoming book, Research on students' AI attitudes and use
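Sam Altman's 0.34 watt-hour figure is easy to sanity-check against the oven and lightbulb comparisons quoted above; the appliance wattages below are our own rough assumptions, not numbers from the episode.

# Sanity-checking the "0.34 Wh per ChatGPT query" comparisons.
query_wh = 0.34
query_joules = query_wh * 3600             # 1 Wh = 3,600 J -> about 1,224 J per query

oven_watts = 1000                          # assumed domestic oven element
bulb_watts = 10                            # assumed high-efficiency LED bulb

print(query_joules / oven_watts)           # ~1.2 seconds of oven use ("a little over one second")
print(query_joules / bulb_watts / 60)      # ~2 minutes of LED bulb use ("a couple of minutes")

Both results line up with the comparisons quoted in the interview, a useful reminder that per-query energy use is small even if aggregate data-centre demand is not.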
/episode/index/show/aiineducationpodcast/id/36979500
Students as AI Innovators, with Brett Moller
06/05/2025
Students as AI Innovators In this inspiring episode of the podcast, hosts Ray and Dan speak with Brett Moller at St Andrew's on the Sunshine Coast. Brett shares how his school is flipping the AI narrative — from fear and compliance to student agency, creativity, and real-world problem-solving. He discusses how students are not only using AI tools but building their own large language models, crafting apps that respond to deeply personal challenges like Parkinson’s disease and anxiety, and collaborating with local industry on meaningful tech projects. From AI-powered research assistants to empathy-led app design, Brett’s stories highlight a future where students are not just consumers, but creators of AI solutions. This episode is packed with powerful insights on ethical AI use, teacher transformation, and the evolving role of libraries, educators, and students in the AI era. Learn more about the St Andrew's story: Links to things mentioned in this episode: - Dr Ken Kahn's book
/episode/index/show/aiineducationpodcast/id/36833420
Agents, Optimism & Essays: The Real AI Student Life
05/29/2025
In this episode of the AI in Education Podcast, Dan and Ray dive deep into how students are really using AI - beyond the hype. They unpack recent research findings, exploring how students interact with models like Claude for study, writing, and even problem-solving. They discuss the latest sentiment data from a KPMG/University of Melbourne Business School report revealing surprising differences in global optimism and concern about AI. (See the chart below, which isn't in the report, but Ray's creation from Figure 15) Plus, they share updates from Google I/O and Microsoft Build, highlight emerging trends in multi-agent systems, and reflect on how AI tools like Veo are reshaping content creation. From skeptical spouses to the evolving role of educators, this episode blends data, insight, and laughs.
/episode/index/show/aiineducationpodcast/id/36761460
Students First: How AI Is Changing Study Habits
05/22/2025
In this episode of the AI in Education Podcast, hosts Ray and Dan kick off Series 12 with a powerful focus on students—how they learn, what they need, and how AI is shaping their academic journeys. Joining them is Nina, who shares revealing insights from a recent global student survey, which polled over 11,000 students across 15 countries. Nina dives into how students are turning to generative AI tools like ChatGPT more than their professors, not out of laziness but to fill gaps in clarity, access, and support. The trio explores the need for AI that’s student-specific - not just curriculum-aligned - and the importance of pedagogical design in educational tech. They also tackle key issues of equity, mental health, and the real-world skills students want for future workplaces. Links:
/episode/index/show/aiineducationpodcast/id/36653005
Tutor, Teacher, Cheater: What Students Really Think About AI
05/15/2025
New Series Alert - all about students In the kickoff episode of Series 12, Dan and Ray set the stage for a deep dive into AI from the student's perspective. Why are students confused about AI? How are they actually using it - and how should they be using it? The hosts explore the idea that AI can act as a tutor, a teacher, or a shortcut (a "cheater") and reflect on how this plays out in real classrooms and learning experiences. They also caught up on plenty of news this week - transdisciplinary learning models, new AI education policies from countries like China and the UAE, and how major tech players like Microsoft and Google are adapting their tools for younger learners. The episode also highlights a new meta-analysis on ChatGPT’s effect on student learning. The meta analysis paper is Finally, there's two things we ask you, the listener, for in this episode: If you haven't already, pop into your podcast app and give us a rating and review Help shape this season by connecting the podcast with students and researchers who can share insights into real-life student experiences with AI Have feedback or want to get involved? Email us at
/episode/index/show/aiineducationpodcast/id/36575710
From Hollywood to Healthcare: AI That Cares
05/08/2025
In this special solo-hosted episode, is joined by Dr. Mike Seymour from the University of Sydney, recorded live at the . You can find Mike through and Mike shares captivating insights from his work in digital humans - lifelike AI avatars that can support learning, healthcare, and emotional wellbeing. From using VR to train veterinary students with virtual sheep, to exploring how emotionally intelligent digital tutors can transform how students ask questions, this episode dives deep into the practical, human-centered future of AI. Mike draws on his background in Hollywood visual effects and his current research to make a compelling case: AI isn't a single tool - it's a system, a landscape, a way to augment and extend human capability. The conversation ranges from the ethics of AI in healthcare to using AI for dubbing films, always returning to a central theme: how technology can support, not replace, people. And when you've listened to this podcast, jump across to Mike's two podcasts - the and - which both take you on a journey to Hollywood and visual effects There are so many resources that you can read to dive into this story further: Mike's - researching AI Digital Humans Look at the timestamps on the videos - amazing videos from 5 years ago that we can now do in consumer apps, like this and the - amazing that it's gone from bleeding edge to available in everyday consumer apps in half a decade! Whether you're an educator, technologist, or just curious about the future, this episode is packed with ideas that will stick with you.
/episode/index/show/aiineducationpodcast/id/36481710
Uber Prompts and AI Myths
05/01/2025
In this episode of the AI in Education Podcast, Ray and Dan return from a short break with a packed roundup of AI developments across education and beyond. They discuss the online launch of the AEIOU interdisciplinary research hub that Dan attended, explore the promise and pitfalls of prompt engineering—including the idea of the “Uber prompt”—and share first impressions of the OpenAI Academy. Ray unpacks misleading headlines about Bill Gates “replacing teachers” with AI and instead spotlights the real message about AI tutors. They also dive into the 2027 AI forecast report, the emerging impact of the EU AI Act, and Microsoft's latest Work Trend Index, which introduces the idea of "agent bosses" in the AI-driven workplace. And then round off with Ben Williamson’s list of AI fails in education and a startling story of an AI radio presenter nobody realised was fake. Here's all the links so you too can fall down the AI news rabbithole 😊 AI in Education at Oxford University interdisciplinary research hub OpenAI Academy Introduction to ChatGPT Edu: Your AI-Powered Academic Companion AI for Academic Success: Research, Writing, and Studying Made Easier Mastering Prompts: The Key to Getting What You Need from ChatGPT AI 2027 Forecast Report 2025 EDUCAUSE Students and Technology Report: Shaping the Future of Higher Education Through Technology, Flexibility, and Well-Being Melbourne & KPMG AI Sentiment Survey Microsoft Work Trend Index 2025 Ben Williamson’s AI Fails in Education on LinkedIn This links to all the individual stories, so head to Ben's story, give it a like, and then click through to the warnings from the stories! Australian Radio Network digitises diversity with an artificially generated Asian female presenter
/episode/index/show/aiineducationpodcast/id/36385490