The Digital Education Futures Initiative (DEFI) hosted the inaugural Generative AI in Education conference on 16 and 17 October 2024. The event took place in Cambridge, a city renowned for its academic heritage, drawing together a diverse group of participants—academics, educators, policymakers, and thought leaders. Over two days, attendees explored the rapidly evolving role of generative artificial intelligence (AI) in education, debating its potential to reshape teaching and learning across all levels.

A new hybrid learning paradigm

The conference opened with an insightful keynote by Prof Mairéad Pratschke (University of Manchester), who set out the context for the current rise of AI, where it is heading, and the opportunities this affords education at all levels. Pratschke introduced the concept of a “new hybrid” relationship between learners and technology, discussing the idea of AI as a practitioner in ideation and a creative collaborator, and of learning as a dialogue. She presented an applied framework for generative AI with four phases: phase 1, Content (Knowledge); phase 2, Design (Interaction); phase 3, Social (Community); and phase 4, Action (Autonomy). The model envisions the gradual integration of AI in education, mirroring the development and adoption of the internet in education, from experimentation through growth to eventual normalisation.

The four phases visualise how generative AI is adopted in education.

A cautionary view on AI in education

In contrast, Prof Wayne Holmes offered a more critical perspective. Drawing from critical studies, Holmes argued that the growing reliance on AI could risk disempowering teachers rather than freeing them. He likened AI’s impact on teaching to the effect of GPS on human spatial reasoning—while helpful, over-reliance could erode essential skills. Rather than saving time, he warned that AI might “displace” teachers’ time, forcing them to adapt to new tools while losing control over core educational functions.

Personalisation: promise or myth?

Personalisation, one of AI’s most touted promises in education, was a hot topic throughout the conference. Presenters like Christina Supe (WriteTogether.ai), Peter Bannister (Universidad Internacional de La Rioja), and Razoun Siddiky (LEARN European Multilingual School) highlighted the potential for AI to help marginalised learners by adapting content to individual needs. However, they also cautioned that AI tools must be designed with equity in mind to avoid deepening existing inequalities.

Prof Holmes, however, was quick to critique the notion of AI-driven personalisation. He argued that true personalisation would require a fundamentally different approach to education, one that does not assume all students should reach the same outcomes. According to Holmes, AI’s current iteration often merely replicates one-size-fits-all models with superficial tweaks, rather than fostering genuinely personalised learning paths.

Rethinking assessment: process over product

The role of AI in assessment sparked significant conversation, particularly around how generative AI could shift the focus from outcomes to the process of learning. Pratschke presented the perspective of “Generativism”, framing assessment as a process rather than an output. This approach resonated with educators grappling with formative assessment challenges.

Another intriguing example came from Li, Gould, and Jameson (Cambridge Mathematics), who argued that generative AI could reshape maths education by encouraging students to engage critically with problem-solving. They suggested AI could prompt learners to ask “what if?” questions and explore multiple interpretations of a problem, fostering deeper engagement and critical thinking.

The role of big tech and ethics

A recurring theme throughout the conference was the growing influence of big tech in shaping the AI revolution in education. Many speakers raised concerns that current AI tools are largely developed for commercial interests rather than with educational needs in mind. This has sparked important ethical debates about the safety, effectiveness, and transparency of these technologies in classroom settings. Are these tools truly designed to support learning, or are they repurposed from broader markets without fully considering the unique demands of education?

Niklas Scholz (Saarland University) addressed these concerns in his presentation on student agency when interacting with AI. Presenting his research with students and educators, he argued that trust is a critical factor in the successful integration of AI into learning environments: trust not only between students and AI systems, but also between educators and the tools they are expected to use. Without a foundation of trust, Scholz contended, the transformative potential of generative AI in education will remain unrealised. If students or teachers perceive AI tools as opaque, unreliable, or biased, they are unlikely to embrace them fully, limiting their effectiveness in enhancing learning outcomes.

These ethical considerations are not just technical issues but also touch on questions of power, agency, and control in education. The widespread adoption of AI in classrooms could reshape traditional roles, and if driven by corporate interests, the focus may shift from enhancing education to generating profit. A number of speakers and conference attendees called for a more human-centred approach, where AI is designed and implemented with educational values—such as equity, trust, and transparency—at the forefront.

Frameworks for AI integration in education

For researchers exploring the role of generative AI in computing education, the conference provided rich insights into a variety of theoretical frameworks shaping this emerging field. Among the most frequently cited was the Technological Pedagogical Content Knowledge (TPACK) model (Mishra & Koehler, 2006), which has long served as a guide for integrating technology into teaching. TPACK helps educators consider the intersection of content knowledge, pedagogical strategies, and technological tools. Many speakers suggested that TPACK may need to be updated to account for AI’s specific capabilities, such as adaptive feedback, content generation, and personalised learning pathways. These adaptations would reflect AI’s potential to transform not just how we teach, but how students engage with and construct knowledge.

Another key model discussed was Davis’ (1989) Technology Acceptance Model (TAM), which evaluates technology adoption through two main factors: perceived usefulness and perceived ease of use. This model, although initially developed for general technology adoption, has gained traction in the AI education space. One compelling example of TAM’s application came from Clarke et al. (Canterbury Christ Church University), who introduced the Ped-AI-gogy Informed Model (PIM). PIM extends TAM by providing a structured approach for educators to move from initial apprehension toward AI integration and eventual normalisation in their teaching practices. The model specifically addresses the psychological barriers many educators face when confronted with AI, offering strategies to build confidence in using these technologies effectively.

These frameworks help move the conversation beyond mere adoption, toward a deeper understanding of how AI can be meaningfully integrated into educational contexts.

Looking ahead: the importance of AI literacy

One key question emerging from the conference was where AI literacy should sit within the existing education framework. Should it be integrated into computing curricula, or should it be part of a broader digital literacy initiative spanning multiple subject areas? Holmes stressed the importance of teacher training in addition to teaching students about AI, arguing that a bottom-up approach is needed alongside top-down policy if teachers are to skill up. As AI continues to shape our world, ensuring that both teachers and students are equipped with the knowledge and skills to engage critically with AI is crucial.

Contributions to the field from the Raspberry Pi Computing Education Research Centre

The conference afforded our team the opportunity to present the centre’s ongoing research into both learning computing with AI and learning about AI. Our pilot study on using large language models (LLMs) to explain programming error messages was led by Veronica Cucuiat and Jane Waite. Their research suggests that trust plays a critical role in whether students accept AI-generated feedback, and they proposed a new model of “AI interaction literacy” based on feedback literacy. This ongoing work will explore how educators can effectively integrate LLMs into computing classrooms.

Jane Waite presenting on AI interaction literacy in education.
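To make the kind of interaction studied here concrete, the sketch below shows one way a classroom tool might ask an LLM to explain a programming error message to a learner. It is a minimal illustration under assumptions, not the pilot study’s actual implementation: the OpenAI client, the model name, and the prompt wording are all placeholders.

```python
# Minimal sketch of LLM-generated error message explanations.
# NOTE: illustrative only, not the pilot study's implementation;
# the OpenAI client, model name, and prompt wording are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def explain_error(code: str, error_message: str) -> str:
    """Ask an LLM to explain a Python error in learner-friendly terms."""
    prompt = (
        "A secondary school student ran this Python program:\n\n"
        f"{code}\n\n"
        f"It produced this error:\n\n{error_message}\n\n"
        "Explain in plain language what went wrong and give a hint "
        "towards a fix, without writing the corrected code outright."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


buggy_code = 'print("Total: " + 42)'
error = 'TypeError: can only concatenate str (not "int") to str'
print(explain_error(buggy_code, error))
```

Whether a learner accepts such an explanation, the research suggests, depends less on the model itself than on the trust and feedback literacy the learner brings to the exchange.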

Bobby Whyte also presented research on AI literacy among secondary school students. Our findings suggest that while students already have emerging conceptions about AI, many of these ideas are incomplete or naïve. This builds on our evaluation of the impact of AI educational programmes in the UK funded by Google DeepMind, and highlights the need for clear, structured AI literacy education.

The conference underscored both the promise and the challenges of integrating generative AI into education. While the technology holds immense potential to transform how we teach and learn, it also presents new ethical, practical, and pedagogical challenges. I am excited for the return of this conference (fingers crossed) and the continued research into these themes.