
Rapid Rundown - Another gigantic news week for AI in Education

AI Education Podcast

Release Date: 11/19/2023


Rapid Rundown - Series 7 Episode 3

All the key news since our episode on 6th November - including new research on AI in education, and a big tech news week!

It's okay to write research papers with Generative AI - but not to review them!

The publishing arm of the American Association for the Advancement of Science (which publishes six science journals, including Science itself) says authors can use "AI-assisted technologies as components of their research study or as aids in the writing or presentation of the manuscript" as long as their use is noted. But they've banned "AI-generated images and other multimedia" without explicit permission from the editors.

And they won't allow the use of AI by reviewers because this “could breach the confidentiality of the manuscript”.

A number of other publishers have made announcements recently, including

the International Committee of Medical Journal Editors, the World Association of Medical Editors, and the Council of Science Editors.

https://www.science.org/content/blog-post/change-policy-use-generative-ai-and-large-language-models

 

Learning From Mistakes Makes LLM Better Reasoner

https://arxiv.org/abs/2310.20689

News Article: https://venturebeat.com/ai/microsoft-unveils-lema-a-revolutionary-ai-learning-method-mirroring-human-problem-solving

Researchers from Microsoft Research Asia, Peking University, and Xi’an Jiaotong University have developed a new technique to improve large language models’ (LLMs) ability to solve math problems by having them learn from their mistakes, akin to how humans learn.

The researchers have revealed a pioneering strategy, Learning from Mistakes (LeMa), which trains AI to correct its own mistakes, leading to enhanced reasoning abilities, according to a research paper published this week.

The researchers first had models like LLaMA-2 generate flawed reasoning paths for math word problems. GPT-4 then identified errors in the reasoning, explained them and provided corrected reasoning paths. The researchers used the corrected data to further train the original models.
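
To make that three-step loop concrete, here's a minimal sketch of the kind of data pipeline described above - an illustration of the idea, not the authors' code. The `student_generate` and `corrector_generate` helpers are hypothetical stand-ins for calls to the student model (e.g. LLaMA-2) and the corrector model (GPT-4).

```python
# Minimal sketch of a LeMa-style correction-data pipeline (illustrative only).

def student_generate(problem: str) -> str:
    """Ask the student model for a step-by-step solution to a maths word problem."""
    raise NotImplementedError  # placeholder for a real model call


def corrector_generate(problem: str, flawed_solution: str) -> str:
    """Ask the corrector model to identify the error, explain it, and rewrite the solution."""
    raise NotImplementedError  # placeholder for a real model call


def build_lema_dataset(problems, answers):
    """Collect (problem, corrected reasoning) pairs for fine-tuning the student model."""
    examples = []
    for problem, answer in zip(problems, answers):
        attempt = student_generate(problem)
        if str(answer) not in attempt:  # crude check that the attempt got it wrong
            correction = corrector_generate(problem, attempt)
            examples.append({"prompt": problem, "completion": correction})
    return examples
```

The key design choice is that fine-tuning uses the corrected reasoning paths, so the student model sees what a fixed mistake looks like rather than just more correct examples.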

 

 

Role of AI chatbots in education: systematic literature review

International Journal of Educational Technology in Higher Education

https://educationaltechnologyjournal.springeropen.com/articles/10.1186/s41239-023-00426-1#Sec8

Looks at chatbots from the perspective of students and educators, and the benefits and concerns raised in the 67 research papers they studied

"We found that students primarily gain from AI-powered chatbots in three key areas: homework and study assistance, a personalized learning experience, and the development of various skills. For educators, the main advantages are the time-saving assistance and improved pedagogy. However, our research also emphasizes significant challenges and critical factors that educators need to handle diligently. These include concerns related to AI applications such as reliability, accuracy, and ethical considerations."

Also, a fantastic list of references for papers discussing chatbots in education, many from this year

 

 

More Robots are Coming: Large Multimodal Models (ChatGPT) can Solve Visually Diverse Images of Parsons Problems

https://arxiv.org/abs/2311.04926

https://arxiv.org/pdf/2311.04926.pdf

Parsons problems are a type of programming puzzle where learners are given jumbled code snippets and must arrange them in the correct logical sequence rather than producing the code from scratch
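
For example (an illustrative puzzle, not one from the paper), a learner might be handed these shuffled lines and asked to drag them into a working function:

```python
# Shuffled, as presented to the learner:
#     return total
#     total = 0
#     def sum_list(numbers):
#     total += n
#     for n in numbers:
#
# One correct arrangement:
def sum_list(numbers):
    total = 0
    for n in numbers:
        total += n
    return total
```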

"While some scholars have advocated for the integration of visual problems as a safeguard against the capabilities of language models, new multimodal language models now have vision and language capabilities that may allow them to analyze and solve visual problems. … Our results show that GPT-4V solved 96.7% of these visual problems"

The research's findings have significant implications for computing education. The high success rate of GPT-4V in solving visually diverse Parsons Problems suggests that relying solely on visual complexity in coding assignments might not effectively challenge students or assess their true understanding in the era of advanced AI tools. This raises questions about the effectiveness of traditional assessment methods in programming education and the need for innovative approaches that can more accurately evaluate a student's coding skills and understanding.

It's interesting to note that research from earlier in the year found LLMs could only solve half the problems - so things have moved very fast!

 

 

The Impact of Large Language Models on Scientific Discovery: a Preliminary Study using GPT-4

https://arxiv.org/pdf/2311.07361.pdf

By Microsoft Research and Microsoft Azure Quantum researchers

"Our preliminary exploration indicates that GPT-4 exhibits promising potential for a variety of scientific applications, demonstrating its aptitude for handling complex problem-solving and knowledge integration tasks"

The study explores the impact of GPT-4 in advancing scientific discovery across various domains. It investigates its use in drug discovery, biology, computational chemistry, materials design, and solving Partial Differential Equations (PDEs). The study primarily uses qualitative assessments and some quantitative measures to evaluate GPT-4's understanding of complex scientific concepts and problem-solving abilities. While GPT-4 shows remarkable potential and understanding in these areas, particularly in drug discovery and biology, it faces limitations in precise calculations and processing complex data formats. The research underscores GPT-4's strengths in integrating knowledge, predicting properties, and aiding interdisciplinary research.

 

 

An Interdisciplinary Outlook on Large Language Models for Scientific Research

https://arxiv.org/abs/2311.04929

Overall, the paper presents LLMs as powerful tools that can significantly enhance scientific research. They offer the promise of faster, more efficient research processes, but this comes with the responsibility to use them critically, ensuring the integrity and ethical standards of scientific inquiry. It discusses how they are being used effectively in eight areas of science, and deals with issues like hallucinations - but, as it points out, even in engineering, where there's low tolerance for mistakes, GPT-4 can pass critical exams. The paper is a good starting point for researchers thinking about how LLMs may help or change their research areas, and how they can support scientific communication and collaboration.

 

 

With ChatGPT, do we have to rewrite our learning objectives -- CASE study in Cybersecurity

https://arxiv.org/abs/2311.06261

This paper examines how AI tools like ChatGPT can change the way cybersecurity is taught in universities. It uses a method called "Understanding by Design" to look at learning objectives in cybersecurity courses. The study suggests that ChatGPT can help students achieve these objectives more quickly and understand complex concepts better. However, it also raises questions about how much students should rely on AI tools. The paper argues that while AI can assist in learning, it's crucial for students to understand fundamental concepts from the ground up. The study provides examples of how ChatGPT could be integrated into a cybersecurity curriculum, proposing a balance between traditional learning and AI-assisted education.

 

"We hypothesize that ChatGPT will allow us to accelerate some of our existing LOs, given the tool’s capabilities… From this exercise, we have learned two things in particular that we believe we will need to be further examined by all educators. First, our experiences with ChatGPT suggest that the tool can provide a powerful means to allow learners to generate pieces of their work quickly…. Second, we will need to consider how to teach concepts that need to be experienced from “first-principle” learning approaches and learn how to motivate students to perform some rudimentary exercises that “the tool” can easily do for me."

 

 

A Step Closer to Comprehensive Answers: Constrained Multi-Stage Question Decomposition with Large Language Models

https://arxiv.org/abs/2311.07491

In short, AI is continuing to get better at passing exams and multiple-choice questions, and people keep finding new ways to make it better still.
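
Here's a minimal sketch of the general multi-stage decomposition idea - an illustration, not the paper's exact method. `ask_llm` is a hypothetical wrapper around whichever chat model is being used.

```python
# Illustrative sketch: answer a hard question by decomposing it into sub-questions.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder for a real model call


def answer_by_decomposition(question: str) -> str:
    # Stage 1: break the question into simpler sub-questions.
    sub_questions = ask_llm(
        f"Break this question into simpler sub-questions, one per line:\n{question}"
    )
    # Stage 2: answer each sub-question independently.
    facts = [
        ask_llm(f"Answer briefly: {q}")
        for q in sub_questions.splitlines()
        if q.strip()
    ]
    # Stage 3: produce the final answer, constrained to the facts gathered above.
    context = "\n".join(facts)
    return ask_llm(
        f"Using only these facts:\n{context}\n\nAnswer the original question: {question}"
    )
```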

 

 

Assessing Logical Puzzle Solving in Large Language Models: Insights from a Minesweeper Case Study

https://arxiv.org/abs/2311.07387

Good news for me though - I still have a skill that can't be replaced by a robot. It seems that AI might be great at playing Go, and Chess, and seemingly everything else. BUT it turns out it can't play Minesweeper as well as a person. So my leisure time is safe!

 

 

DEMASQ: Unmasking the ChatGPT Wordsmith

https://arxiv.org/abs/2311.05019

Finally, I'll mention this research, where the researchers propose a new method of ChatGPT detection that assesses the 'energy' of the writing. It might be a step forward, but to be honest it took me a while to find the thing I'm always looking for with detectors, which is the false positive rate - i.e. how many students in a class of 100 it will accuse of writing something with ChatGPT when they actually wrote it themselves. And the answer is that it has a 4% false positive rate on research abstracts published on arXiv - but apparently it's 100% accurate on Reddit. I'm not sure that's really good enough for education use, where students are more likely to be using academic style than Reddit style!
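
As a quick back-of-the-envelope check on what that figure means in a classroom (illustrative arithmetic only, using the 4% rate quoted above):

```python
# What a 4% false positive rate looks like for a class where everyone wrote their own work.
class_size = 100
false_positive_rate = 0.04  # human-written abstracts wrongly flagged as AI-generated

wrongly_flagged = class_size * false_positive_rate
print(f"Expect roughly {wrongly_flagged:.0f} of {class_size} honest students to be wrongly flagged.")
```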

I'll leave you to read the research if you want to know more, and learn about the battle between AI writers and AI detectors

 

 

Harvard's AI Pedagogy Project

And outside of research, it's worth taking a look at work from the metaLAB at Harvard called "Creative and critical engagement with AI in education".

It's a collection of assignments and materials inspired by the humanities, for educators curious about how AI affects their students and their syllabi. It includes an AI starter, an LLM tutorial, lots of resources, and a set of assignments

https://aipedagogy.org/

 

 

Microsoft Ignite Book of News

There's way too much to fit into the shownotes, so just head straight to the Book of News for all the huge AI announcements from Microsoft's big developer conference

Link: Microsoft Ignite 2023 Book of News