Marching Backwards into the Future: AI’s Role in the Future of Education

Highlights:

  1. The use of the Bentoism framework to consider the broader, long-term, and collective implications of AI in education.
  2. The potential double-edged nature of AI in education, both supporting and hindering equity and inclusion.
  3. The importance of community-driven ethics frameworks and pedagogical sandboxes in guiding the responsible use of AI in education.
  4. Encouraging critical AI literacy and questioning of assumptions, interests, and power structures embedded in digital tools.
  5. Emphasizing that the adoption of AI in education should reflect collective values, aspirations, and responsibilities.
  6. The call for participatory governance models involving diverse stakeholders to co-create principles for the responsible use of AI in education.

We look at the present through a rear-view mirror. We march backwards into the future.

(McLuhan, 1967)

Marshall McLuhan argued that we tend to see new technology through the lens of what we already know. Today, educational institutions increasingly rely on commercial digital platforms and, as a result, the interests of the companies behind those platforms often shape what we aim to achieve in education.

To push back on this state of affairs, we need a framework for the use of AI in education that allows us to think beyond the here-and-now and adopt a more future-focused orientation. Such a framework helps us resist ‘quick wins’ that might entrench systems which militate against human flourishing over the longer term.

Introducing Bentoism

Yancey Strickler (2019) has introduced an approach he calls ‘Bentoism’, named after the compartmentalised Japanese bento boxes used to serve meals. Bentoism asks us to think about four perspectives: what matters to me now, to us now, to future me, and to future us. For example, a school might use Bentoism to balance a student’s immediate needs with the wellbeing of the whole class, and to consider how today’s choices affect everyone’s future.

The ‘Me’ and ‘Us’ elements of the Bentoist approach can be interpreted in various ways. In this piece, ‘Me’ refers to personal, inward-looking factors such as self-reflection, while ‘Us’ focuses on social, relational aspects like interactions and connections with others.

When we apply the approach to AI’s role in the future of education, we might end up with something similar to the following:

Now Me: AI companies and consultants are keen to discuss “personalised learning” and the ability of automation to reduce teacher workload. However, this might have the side-effect of diminishing critical engagement in both learners and educators (Beetham, 2024). AI often delivers answers with an air of confidence, which can lead students to accept them too readily instead of learning to question and think critically.

Now Us: While AI systems promise to personalise learning and improve efficiency, there is a growing concern that the same technologies may undermine the social fabric of learning communities. Meyer (2023) highlights the centrality of “connection over content,” arguing that meaningful learning is built on relationships and shared inquiry rather than the mere delivery of information.

Future Me: Adaptive learning tools could help people keep learning throughout their lives, but to work well they often require large amounts of personal data. An AI system might, for example, remember what you struggled with last year and suggest new lessons; the same data could also be used to constrain your choices in the future (Williamson and Eynon, 2020). Popular AI tools have recently introduced ‘memory’ features that carry data between conversations (TechCrunch, 2025). While convenient for users, AI platforms that accumulate years of learning data risk building profiles that limit, rather than expand, opportunities for learners.

Future Us: The growing influence of commercial providers in AI-driven education presents the risk that educational priorities may be shaped by the interests of technology companies rather than the needs of learners and educators (Watters, 2021; Varsik & Vosberg, 2024). Recent research highlights that AI’s impact on equity and inclusion is double-edged (Varsik & Vosberg, 2024).

This double-edged nature of AI is worth exploring further:

It is too simplistic to say that AI is either a positive or a negative force for the future of education. As with any technology, it can be both at once. What we need are guidelines and guardrails.

Ethical Guardrails

In the context of expanding surveillance and inequality, Facer (2011) called for schools to act as resources for fairness, democracy, and sustainable futures. This call is more urgent than ever as AI becomes embedded in educational systems worldwide. Without educator and regulatory oversight, the commercial imperatives underlying AI are likely to be foregrounded at the expense of educational outcomes.

Using AI without considering the risks involved can make existing problems worse, but banning it outright would close off new and better ways of using it. Instead, we need a middle ground in which AI is used carefully and everyone’s voice is heard.

Such a community-driven approach could take a number of forms, from community-driven ethics frameworks to pedagogical sandboxes in which educators and learners can experiment with AI tools in low-stakes settings.

In addition, ethics frameworks should address broader social justice concerns — including equity, inclusion, and respect for human rights and dignity (CADRE, 2024). This means designing AI systems that distribute benefits and burdens equitably, protect privacy and autonomy, and allow meaningful human control and recourse (Ibid.).

Countering AI Determinism

If AI is framed as something we use but cannot control, administrators, educators, and students may feel powerless. When we remember that humans design and deploy AI, however, we can make choices that reflect our own values (Williamson and Eynon, 2020).

Education systems can help students understand that technologies reflect human choices, values, and social contexts (Selwyn, 2019; Pischetola, 2021). Integrating AI literacies throughout the curriculum, as proposed by Gunder (2023) and echoed in emerging frameworks (Darvishi et al., 2024), encourages diverse ways of engaging with AI. This stands in opposition to the ‘templated’ approaches advocated by technology vendors, in which “frictionless” use is equated with literacy (Beetham, 2024).

We can take a critical and constructivist approach to encourage educators and students to question the assumptions, interests, and power structures embedded in digital tools (Pischetola, 2021). This approach builds communities of practice where knowledge emerges through exploration, dialogue, and iterative inquiry (Darvishi et al., 2024). Embedding critical AI literacy means students and staff are equipped to question, interpret, and challenge the information and outputs generated by AI systems, rather than passively accepting them (Luckin, 2022). Such literacies are best developed across subjects and levels, using real-world examples and scaffolded activities to ensure students can apply their skills in varied contexts (Darvishi et al., 2024).

As with any technology, integrating AI into education systems is not merely a technical matter, but fundamentally an ethical and political one. As Selwyn (2019) argues, the adoption of AI must be presented as a choice: one that reflects collective values, aspirations, and responsibilities.

Conclusion

McLuhan’s rear-view mirror metaphor remains as relevant as ever: it reminds us that our tendency to interpret new technologies through the lens of past experience can limit our ability to recognise and respond to both the transformative potential and the risks of innovations like AI. Integrating AI into education systems is best approached from a perspective that goes beyond the here-and-now and considers both Future Me and Future Us.

In this article, we applied Bentoism’s four-quadrant framework to consider the broader, long-term, and collective implications of technological change. If schools and policymakers focus only on quick wins, such as making the “delivery” of education more efficient, they may miss the chance to make learning more equitable, accessible, and relevant for everyone.

To address these challenges, we need participatory governance models bringing together diverse stakeholders, such as students, educators, administrators, families, and technical experts, to co-create the principles and practices that will guide the responsible use of AI. This approach not only helps ensure that AI systems reflect the values and needs of those most affected but also empowers communities to challenge and reshape technology, rather than passively accepting its direction.

Using the Bentoism framework allows us to “zoom out” and consider the needs of our present and future selves, as well as those of our communities now and in the future. This leaves us better equipped to respond to those who present commercial visions of AI as ‘inevitable’, and it reminds us that the future is a set of choices shaped by our collective values, aspirations, and willingness to engage in open, pluralistic dialogue.

Acknowledgements

Thank you to Bryan Alexander, Helen Beetham, Laura Hilliger, Ian O’Byrne, and Karen Louise Smith for conversations that helped with the development of this article. Their pieces can be found via this Linktree.

References
