Marching Backwards into the Future: AI’s Role in the Future of Education
Highlights:
- The use of the Bentoism framework to consider the broader, long-term, and collective implications of AI in education.
- The potential double-edged nature of AI in education, both supporting and hindering equity and inclusion.
- The importance of community-driven ethics frameworks and pedagogical sandboxes in guiding the responsible use of AI in education.
- Encouraging critical AI literacy and questioning of assumptions, interests, and power structures embedded in digital tools.
- Emphasizing that the adoption of AI in education should reflect collective values, aspirations, and responsibilities.
- The call for participatory governance models involving diverse stakeholders to co-create principles for the responsible use of AI in education.
We look at the present through a rear-view mirror. We march backwards into the future.
(McLuhan, 1967)
Marshall McLuhan said we often see new technology through the lens of what we already know. Today, educational institutions increasingly rely on commercial digital platforms and, as a result, these commercial interests often shape what we aim to achieve in education.
To push back on this state of affairs, we need a framework for the use of AI in education that allows us to think beyond the here-and-now towards a more future-focused orientation. Such a framework enables us to resist ‘quick wins’ that might entrench systems which militate against human flourishing over the longer term.
Introducing Bentoism
Yancey Strickler (2019) has introduced an approach he calls ‘Bentoism’, named after the compartmentalised Japanese bento box used to serve meals. Like the box, the framework has four compartments, each a perspective to consider: what matters to me now, to us now, to future me, and to future us. For example, a school might use Bentoism to balance a student’s immediate needs with the wellbeing of the whole class, and to consider how today’s choices affect everyone’s future.
The ‘Me’ and ‘Us’ elements of the Bentoist approach can be interpreted in various ways. In this piece, ‘Me’ refers to personal, inward-looking factors such as self-reflection, while ‘Us’ focuses on social, relational aspects like interactions and connections with others.
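To make the four quadrants concrete, here is a minimal sketch (in Python, purely illustrative; the quadrant names come from Strickler, while the structure and example entries are our own invention) of how a school might record the considerations behind an AI adoption decision and check that no quadrant has been overlooked:

```python
from dataclasses import dataclass, field

@dataclass
class BentoGrid:
    """The four Bentoism perspectives, used to record the
    considerations behind a single AI adoption decision."""
    now_me: list[str] = field(default_factory=list)      # personal, immediate
    now_us: list[str] = field(default_factory=list)      # social, immediate
    future_me: list[str] = field(default_factory=list)   # personal, long-term
    future_us: list[str] = field(default_factory=list)   # collective, long-term

    def is_balanced(self) -> bool:
        """A decision counts as fully considered only when every
        quadrant contains at least one entry."""
        return all([self.now_me, self.now_us, self.future_me, self.future_us])

# Example: weighing up an adaptive learning platform
decision = BentoGrid()
decision.now_me.append("Reduces my marking workload this term")
decision.future_us.append("Years of learner data could narrow future opportunities")
print(decision.is_balanced())  # False: two quadrants are still unexamined
```

The point of the sketch is simply that the framework forces all four boxes to be filled in before a decision counts as considered; the ‘quick wins’ of the Now Me quadrant cannot stand alone.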
When we apply the approach to AI’s role in the future of education, we might end up with something similar to the following:
Now Me: AI companies and consultants are keen to discuss “personalised learning” and the ability of automation to reduce teacher workload. However, this might have the side-effect of diminishing critical engagement in both learners and educators (Beetham, 2024). AI tools often present answers with unwarranted confidence, which can lead students to accept those answers uncritically rather than learning to question and think for themselves.
Now Us: While AI systems promise to personalise learning and improve efficiency, there is a growing concern that the same technologies may undermine the social fabric of learning communities. Meyer (2023) highlights the centrality of “connection over content,” arguing that meaningful learning is built on relationships and shared inquiry rather than the mere delivery of information.
Future Me: Adaptive learning tools could help people keep learning throughout their lives. To work well, however, these tools often need a great deal of personal data. For example, an AI system might remember what you struggled with last year and suggest new lessons; the same data could also be used to constrain your choices in the future (Williamson and Eynon, 2020). Popular AI tools have recently introduced a “memory” feature that allows them to carry data across conversations (TechCrunch, 2025). While convenient for users, AI platforms that collect years of learning data risk building profiles that limit, rather than expand, opportunities for learners.
Future Us: The growing influence of commercial providers in AI-driven education presents the risk that educational priorities may be shaped by the interests of technology companies rather than the needs of learners and educators (Watters, 2021; Varsik & Vosberg, 2024). Recent research highlights that AI’s impact on equity and inclusion is double-edged (Varsik & Vosberg, 2024).
This double-edged nature of AI is worth exploring further:
- On one hand, AI can adapt learning to individual needs, support under-resourced students, and help educators identify and address systemic inequalities (Klimova & Pikhart, 2025). For example, AI-driven Universal Design for Learning (UDL) approaches can make curricula more accessible for students with disabilities or those from diverse backgrounds (Klimova & Pikhart, 2025).
- On the other hand, access to AI-powered tools is not evenly distributed, with students from lower-income households and rural areas often facing barriers to digital equity (IEEE, 2023). AI systems can also reflect and amplify existing biases, especially when trained on data mirroring societal inequalities (Varsik & Vosberg, 2024). As such, these technologies may exacerbate opportunity gaps and create new forms of exclusion, particularly for marginalised groups (Klimova & Pikhart, 2025).
It is too simplistic to say that AI is either a positive or a negative force for the future of education. As with any technology, it can be both at once. What we need are guidelines and guardrails.
Ethical Guardrails
In the context of expanding surveillance and inequality, Facer (2011) called for schools to act as resources for fairness, democracy, and sustainable futures. This call is more urgent than ever as AI becomes embedded in educational systems worldwide. Without educator and regulatory oversight, the commercial imperatives underlying AI are likely to be foregrounded at the expense of educational outcomes.
Using AI without considering the risks involved can make existing problems worse, but banning it outright would prevent us from discovering new and better ways to use it. Instead, we need a middle ground: using AI carefully while making sure everyone’s voice is heard.
There are a number of forms such a community-driven approach could take:
- Community-crafted ethics frameworks show how communities can work together to create clear rules for using AI in education (Institute for Ethical AI in Education, 2025).
- Pedagogical sandboxes provide safe spaces where students and teachers can try out AI tools without fear of serious negative consequences (BCcampus, 2023). These environments can support collaborative experimentation and ongoing refinement, and give educators the opportunity to explore both the educational benefits and potential risks of AI in a low-pressure context.
- Reality checks on AI claims are important because it is crucial to scrutinise systems that can be biased, produce misinformation, or make mistakes. Educational communities should establish clear, reliable, and well-documented processes to verify the claims put forward by AI vendors.
In addition, ethics frameworks should address broader social justice concerns, including equity, inclusion, and respect for human rights and dignity (CADRE, 2024). This means designing AI systems that distribute benefits and burdens equitably, protect privacy and autonomy, and allow meaningful human control and recourse (CADRE, 2024).
Countering AI Determinism
If AI is framed as something we use but cannot control, administrators, educators, and students may feel powerless. But when we remember that humans design and deploy AI, we can make choices that reflect our own values (Williamson and Eynon, 2020).
Education systems can help students understand that technologies reflect human choices, values, and social contexts (Selwyn, 2019; Pischetola, 2021). Integrating AI literacies throughout the curriculum, as proposed by Gunder (2023) and echoed in emerging frameworks (Darvishi et al., 2024), helps encourage diverse ways of engaging with AI. This stands in opposition to the ‘templated’ approaches advocated by technology vendors, in which “frictionless” use is equated with literacy (Beetham, 2024).
We can take a critical and constructivist approach to encourage educators and students to question the assumptions, interests, and power structures embedded in digital tools (Pischetola, 2021). This approach builds communities of practice where knowledge emerges through exploration, dialogue, and iterative inquiry (Darvishi et al., 2024). Embedding critical AI literacy means students and staff are equipped to question, interpret, and challenge the information and outputs generated by AI systems, rather than passively accepting them (Luckin, 2022). Such literacies are best developed across subjects and levels, using real-world examples and scaffolded activities to ensure students can apply their skills in varied contexts (Darvishi et al., 2024).
As with any technology, integrating AI into education systems is not merely a technical matter, but fundamentally an ethical and political one. As Selwyn (2019) argues, the adoption of AI must be presented as a choice: one that reflects collective values, aspirations, and responsibilities.
Conclusion
McLuhan’s rear-view mirror metaphor remains as relevant as ever, reminding us that our tendency to interpret new technologies through the lens of past experiences can limit our ability to recognise and respond to the transformative potential, and the risks, of innovations like AI. Integrating AI into education systems is best considered using a perspective that goes beyond the here-and-now and considers both future me and future us.
In this article, we applied Bentoism’s four-quadrant framework to help consider broader, long-term, and collective implications of technological change. If schools and policymakers focus only on quick wins, such as making the “delivery” of education more efficient, they may miss the chance to make learning more equitable, accessible, and relevant for everyone.
To address these challenges, we need participatory governance models bringing together diverse stakeholders, such as students, educators, administrators, families, and technical experts, to co-create the principles and practices that will guide the responsible use of AI. This approach not only helps ensure that AI systems reflect the values and needs of those most affected but also empowers communities to challenge and reshape technology, rather than passively accepting its direction.
Using the Bentoism framework allows us to “zoom out” and consider the needs of our present and future selves, as well as our communities now and in the future. This leaves us better equipped to respond to those who present commercial visions of AI as ‘inevitable’. It also reminds us that the future is a set of choices, shaped by our collective values, aspirations, and willingness to engage in open, pluralistic dialogue.
Acknowledgements
Thank you to Bryan Alexander, Helen Beetham, Laura Hilliger, Ian O’Byrne, and Karen Louise Smith for conversations that helped with the development of this article. Their pieces can be found via this Linktree.
References
- BCcampus (2023) ‘Sandbox Approach to Empowering Learners’ Aspirations’. Available at: https://bccampus.ca/2023/07/10/sandbox-approach-to-empowering-learners-aspirations/ (Accessed: 22 April 2025).
- Beetham, H. (2024) ‘What price your “AI-ready” graduates?’, imperfect offerings, 7 August. Available at: https://helenbeetham.substack.com/p/what-price-your-ai-ready-graduates (Accessed: 23 April 2025).
- CADRE (2024) ‘Toward Ethical and Just AI in Education Research’. Available at: https://cadrek12.org/sites/default/files/2024-06/CADRE-Brief-AI-Ethics-2024.pdf (Accessed: 22 April 2025).
- Darvishi, S., Hauck, M. and Open University (2024) OU Critical AI Literacy Framework 2025. Available at: https://www.open.ac.uk/blogs/learning-design/wp-content/uploads/2025/01/OU-Critical-AI-Literacy-framework-2025-external-sharing.pdf (Accessed: 22 April 2025).
- Facer, K. (2011) Learning futures: Education, technology and social change. London: Routledge.
- Gunder, A. (2023) ‘Dimensions of AI Literacies’. OpenEd Culture. Available at: https://openedculture.org/projects/dimensions-of-ai-literacies/ (Accessed: 22 April 2025).
- IEEE Connecting the Unconnected (2023) ‘Digital Equity in Schools’. Available at: https://ctu.ieee.org/blog/2023/02/22/digital-equity-in-schools/ (Accessed: 22 April 2025).
- Institute for Ethical AI in Education (2025) ‘The Ethical Framework for AI in Education’. Available at: https://www.ai-in-education.co.uk/resources/the-institute-for-ethical-ai-in-education-the-ethical-framework-for-ai-in-education (Accessed: 22 April 2025).
- Klimova, B. and Pikhart, M. (2025) ‘Exploring the effects of artificial intelligence on student and academic well-being in higher education: a mini-review’, Frontiers in Psychology, 16, 1498132. doi: 10.3389/fpsyg.2025.1498132. Available at: https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2025.1498132/full (Accessed: 22 April 2025).
- Luckin, R. (2022) ‘How can we make AI education a priority without using scare tactics?’ In: AI, data science, and young people. Understanding computing education (Vol 3). Proceedings of the Raspberry Pi Foundation Research Seminars. Available at: https://www.raspberrypi.org/app/uploads/2022/08/How-can-we-make-AI-education-a-priority-without-using-scare-tactics_-Luckin-R-2022.pdf (Accessed: 22 April 2025).
- McLuhan, M. (1967) The medium is the massage: An inventory of effects. London: Penguin.
- Meyer, D. (2023) ‘The AI Rapture Ain’t Nigh: What To Do When You Stop Waiting’. YouTube. Available at: https://www.youtube.com/watch?v=pUb9RBZv7Po&t=85s (Accessed: 22 April 2025).
- Pischetola, M. (2021) ‘Re-imagining Digital Technology in Education through Critical and Neo-materialist Insights’, Digital Education Review, 40, pp. 153–166. Available at: https://files.eric.ed.gov/fulltext/EJ1328591.pdf (Accessed: 22 April 2025).
- Selwyn, N. (2019) Should robots replace teachers? AI and the future of education. Cambridge: Polity Press.
- Strickler, Y. (2019) Bentoism. Available at: https://www.ystrickler.com/bentoism/ (Accessed: 22 April 2025).
- TechCrunch (2025) ‘xAI adds a “memory” feature to Grok’, 16 April. Available at: https://techcrunch.com/2025/04/16/xai-adds-a-memory-feature-to-grok/ (Accessed: 22 April 2025).
- Varsik, S. and Vosberg, L. (2024) ‘The potential impact of Artificial Intelligence on equity and inclusion in education’, OECD Artificial Intelligence Papers, No. 23. Paris: OECD Publishing. Available at: https://doi.org/10.1787/15df715b-en.
- Watters, A. (2021) Teaching machines: The history of personalised learning. Cambridge, MA: MIT Press.
- Williamson, B. and Eynon, R. (2020) ‘Historical threads, missing links, and future directions in AI in education’, Learning, Media and Technology, 45(3), pp. 223–235. Available at: https://doi.org/10.1080/17439884.2020.1798995 (Accessed: 22 April 2025).