Determining what students need to know about artificial intelligence (AI) is an ongoing challenge for researchers and educators. To become critical consumers and creators of AI technologies, students need to develop an accurate understanding of what AI is, how it is trained, where it is used, its affordances and tradeoffs, and its (potential) impact on daily life. Understanding what students already know enables educators and resource developers to build on students’ existing ideas, address misconceptions, and help students form accurate mental models of AI systems.
Given the prevalence of AI technologies in daily life, it is reasonable to assume that students already hold many ideas about AI. These are “emerging conceptions”: initial ideas formed before any formal instruction. They may be ‘accurate’, conforming to a canonical understanding of a topic, or ‘naive’, containing inaccurate assumptions about a phenomenon. Previous research has found that students’ naive conceptions include the belief that AI is sentient and has emotions. Identifying these conceptions is essential in determining what and how we teach about AI.
Our study of students’ conceptions of AI
In a recent study presented at the United Kingdom and Ireland Computing Education Research (UKICER) conference, we surveyed students who took part in a range of AI-related education programmes about their perceptions and understanding of AI. Using a single open-ended question (“In the box below, write down what you think AI is”), we elicited and analysed students’ conceptions of AI. We sent the survey to multiple organisations to distribute to their programme participants. In total, we collected 692 responses from students aged 11–18. After removing 218 incomplete or irrelevant responses, 474 remained for analysis.
We coded students’ responses according to whether they reflected accurate or naive conceptions, drawing on existing literature to support our reasoning. We then used SEAME, a popular framework for understanding AI at multiple levels, to determine where students held these conceptions; that is, we categorised the accurate and naive conceptions into four categories: conceptions about the socio-ethical implications of AI, conceptions about where AI is used, conceptions about how AI models are trained, and conceptions about the underlying engines/algorithms that drive AI systems. In total, our analysis identified 660 accurate conceptions and 109 naive conceptions (a single response could contain more than one conception). In the following two sections, we present a snapshot of these conceptions; the full list is available in the paper here.
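To illustrate how the counts in the tables below relate to the percentages we report, here is a minimal, hypothetical sketch (not the analysis code used in the study) that tallies coded conceptions by SEAME level and expresses each count as a share of the 474 analysed responses:

```python
# Hypothetical illustration only: recomputing the percentages reported in the
# tables below from the published counts (n) and the 474 analysed responses.

TOTAL_RESPONSES = 474  # responses remaining after data cleaning

# (SEAME level, conception label, accurate?) -> number of responses coded with it
coded_counts = {
    ("SE", "Assists humans in performing tasks", True): 48,
    ("SE", "Can help solve societal issues", True): 32,
    ("A", "Simulates human behaviour(s)", True): 94,
    ("A", "Used in robotics/self-driving cars", True): 90,
    ("M", "Data-driven (trained on large datasets)", True): 41,
    ("E", "Uses machine learning algorithms", True): 32,
    ("A", "Embedded in all technologies", False): 52,
    ("M", "Looks up information in database", False): 7,
    ("E", "Functions like a human", False): 43,
}

for (level, label, accurate), n in coded_counts.items():
    share = 100 * n / TOTAL_RESPONSES  # percentage of the 474 responses
    kind = "accurate" if accurate else "naive"
    print(f"{level:>2} | {kind:<8} | {label:<45} | n={n:<3} | {share:.0f}%")
```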
Accurate conceptions
Students mostly held accurate conceptions of where AI is embedded, for example in chatbots, voice assistants, robots, and self-driving cars (85%). They accurately described these tools as mimicking human behaviours (e.g. chatbots simulating conversations). They also held accurate conceptions about the use of AI to automate tasks and save time (10%), and about its potential to address societal challenges such as climate change (7%). In terms of models, some students understood the need for large datasets when training AI systems (9%) and that this training data is used to generate outputs (10%). Only a few students described the underlying machine learning algorithms (7%) and how these are modelled on (but don’t replicate) human cognitive processes.
| SEAME | Accurate conception of AI | Example | n | % |
| --- | --- | --- | --- | --- |
| SE | Assists humans in performing tasks (e.g. fewer errors, automation) | “[AI] helps humans to complete their jobs in more efficient methods.” | 48 | 10% |
| SE | Can help solve societal issues (e.g. climate change, research) | “AI [will] also will create cures for diseases such as cancer.” | 32 | 7% |
| A | Simulates human behaviour(s) (e.g. learning, reasoning, talking, interacting) | “Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems.” | 94 | 20% |
| A | Used in robotics/self-driving cars | “[AI is currently] used as a chatbot or image maker or used in cars like Teslas.” | 90 | 19% |
| M | Data-driven (e.g. trained on large datasets) | “A program trained on an enormous dataset […] to complete tasks.” | 41 | 9% |
| E | Uses machine learning algorithms (i.e. artificial neural networks, decision trees) | “Reinforcement learning to complete a task that someone wants it to do for example sorting things into groups (classifying).” | 32 | 7% |
Naive conceptions
Students held comparatively few naive conceptions, such as the notion that AI is embedded in all technologies (11%). Some felt that AI systems demonstrate human qualities, such as sentience or the ability to express emotions (9%).
| SEAME | Naive conception of AI | Example | n | % |
| --- | --- | --- | --- | --- |
| A | Embedded in all technologies (e.g. computers, internet) | “An internet based thingy to help the internet.” | 52 | 11% |
| M | Looks up information in database | “A network that pulls information from a database and uses it to perform a task” | 7 | 1% |
| E | Functions like a human (e.g. thinking, feeling) | “A non-human sort of sentient with a mind of their own.” | 43 | 9% |
Our findings provide a snapshot of students’ current conceptions of AI. Like the underlying technologies that power AI, these conceptions will also develop over time. Though these results are encouraging, they reveal some underlying inaccurate assumptions about AI. To support an accurate understanding of these emerging technologies, educators and resource developers should aim to develop resources that address these naive conceptions and provide age-appropriate, accurate explanations of how AI works, what it (currently) can and can’t do, and how it differs from human intelligence. By supporting students to develop a deeper understanding of AI, we can better prepare them to engage with these technologies critically and responsibly.
Further reading
This project was part of a wider study funded by Google DeepMind, in which the Raspberry Pi Computing Education Research Centre evaluated a range of AI and STEM school engagement programmes. If you are interested in learning more about this research, you can read the paper here. As part of the same project, we have also written about how we measured teachers’ self-efficacy and career awareness when teaching about AI, and we co-authored a paper with the team at the Raspberry Pi Foundation to understand the impact of Experience AI, an introductory curriculum for artificial intelligence and machine learning, on KS3 students’ conceptions and perceptions of AI.