Discussing AI and education futures

Key Points:


A webinar, some short essays, and an international collaborative

How might AI impact higher education and how might academics respond to the technology? That is a core question for this newsletter. It’s also the subject of a small project I’d like to share today.

Several months ago the excellent Doug Belshaw virtually assembled a group of us for a writing project. We were to craft very short think pieces in response to a UNESCO call for materials on “AI and the Future of Education: Disruptions, Dilemmas and Directions.” Our international group - Helen Beetham, Laura Hilliger, Ian O'Byrne, Karen Louise Smith, Doug, and myself - met on Zoom, collaborated through Etherpad and Google Docs, and managed to submit a cluster of short essays. Each one took a different tack.

We’re awaiting word on what UNESCO thought of the submissions, but in the meantime we’re holding an informal webinar on the topic tomorrow. Please join us if you can make it and if there’s room. We’ve also posted versions of our think pieces for your perusal right here. A slightly edited version of mine is below the following images. As always, I’m eager to hear your thoughts.

Many thanks to colleagues Helen Beetham, Doug Belshaw, Laura Hilliger, Ian O'Byrne, and Karen Smith for conversation and kind feedback on my paper, but even more for thinking and working together.

“Several futures for AI and education”

Bryan Alexander, Georgetown University

In considering the future of AI and education, it is important to recognize the complexity and instability of the topic. Many pronouncements proclaim that a certain type of settlement will occur universally, be it the productive adoption of AI for learning’s benefit or the collapse of education across the board. Yet we must consider multiple levels of contingency in order to take the question seriously. It seems likely that educational use will not be singular but plural, consisting of many different use cases.

To begin with, any discussion seeking to comprehend the topic should consider the sheer scale of the AI-education intersection. There are literally millions of schools around the world, each situated within local and national frameworks of power with varying degrees of relative autonomy. Educators, from teachers to support staff to administrators, similarly have some degree of decision-making power. Multiple third parties exert influence over this vast ecosystem: publishers, educational technology providers, religious institutions, military organizations, professional certifiers. And all of these situations of control and possibility change, sometimes frequently. Into this complex reality comes AI, with its own layers of complexity. There are multiple large companies and start-ups, competing governments, open source projects, standards, types of applications, and huge sums of investment, all interacting at high speed. New categories of AI, such as video generators and personal agents, are struggling to be born as of this writing. Members of this ecosystem, too, are subject to transformation.

That degree of transformational possibility is potentially larger than it seems when it comes to generative AI. There is a fragility to the technology at present, or rather several forms of brittleness. To begin with, there does not appear to be a reliable business model for large language models; the established digital economics of advertising and data monetization have not panned out for AI so far. Many LLMs are extremely expensive to train, develop, and maintain, yet none has generated a revenue stream which breaks even, much less turns a profit. The investors who have sunk billions of dollars and euros into LLMs are eager to justify their investments. As Carlota Perez has argued, those financial supporters are likely to demand enterprise changes as a result.[1] This may happen quickly and/or extensively. Meanwhile open source projects, such as Meta’s Llama, point to different economic models and technological structures, while the Chinese firm DeepSeek claims to have achieved high quality at a fraction of OpenAI’s cost.

Moreover, other forces press on the structure and very existence of generative AI. Government regulations have been tentative so far in Europe and the United States, with the EU’s AI Act still being iterated after legislative approval and America shifting policy between two administrations. China alone seems to have a firm central AI policy, one of national encouragement and also control. Any of these powers can reshape how AI works, how it is funded, and how we access it. To pick one policy area, attitudes towards AI’s climate impact are quite diverse, with some supporting tools which can help us mitigate and adapt to global warming, while others call for their restriction given their large electricity and water usage. Meanwhile, cultural forces could determine AI’s fate. As of this writing pro-AI enthusiasm has been matched by anti-AI critique, with many artists objecting to the technology, joined by some consumers. The critique is now well established and wide-ranging, and carries a strong political charge. In other words, different cultures might divide on how they apprehend AI, with different groups adopting or shunning it. Those artists have legal teeth as well, as a significant number have filed major copyright lawsuits against big AI enterprises. Any of those suits could result in LLMs being modified, suspended, or terminated. To repeat: generative AI might look like a world-shaker, but it is also fragile. This radically shapes how educators might decide to use it.

That cultural dimension includes a subtle yet powerful psychological one. Any technology can adjust the mental makeup of users, and AI is starting to present such a shift now in the form of companionbots and anthropomorphization. A significant number of users treat AI chatbots as akin to human beings, just better than most. There are websites which present bots as friends, mentors, romantic partners, even lovers, à la the movie Her (2013). We might view these emotional attachments as bizarre or pathetic, but they nonetheless occur, and remind us of what some call a global loneliness pandemic. Services like Replika tell us generative AI is providing a psychologically important service. This then influences how educators think about the technology, as well as how business leaders and policymakers view it.

Given these forces of uncertainty, we can start to examine how schools have responded to this rapidly developing, complicated, challenging, and fragile technology. As with the broader society, we have seen educators react with both enthusiasm and apprehension. Rogers’ diffusion model seems to be in play, with early adopters racing ahead to try to build upon the novelty, while opponents resist, and the struggle opens up for the majority in the middle, who must determine their use based on perceptions of incremental advantage. There have been calls for universal adoption alongside individual resistance. So far this echoes how academics have initially reacted to many other major technologies, from the World Wide Web to mobile devices.[2]

Yet there are structural problems at the level of schools and larger organizations. Many educators express the desire for professional development concerning AI, a demand that has not yet been met. Information technology departments grapple with AI on several fronts, and there does not seem to be a consensus approach as of yet: how to safeguard user privacy; how to afford enormously expensive enterprise licenses; how to support an academic community in making the best use of LLMs; how to pick the right service when so many are in play and evolving quickly; whether or not to follow the open source route. Schools are also subject to state policies which, as noted earlier, are still evolving.

These challenges have not stopped academic AI use. There are a number of faculty who use LLMs to help in their teaching, from producing materials to grading. Some staff use AI in administrative work. And, of course, some proportion of the student body uses the technology in their learning. The precise nature and extent of the latter is difficult to determine, due to a lack of good survey research on the topic (think of the challenges of self-reporting), but we can share some results based on instructors’ experiences. Students use AI to produce content: papers, reports, presentation slides, images, and code. They use it to summarize readings. And they sometimes treat AIs as companions, as we noted above. Indeed, a UNESCO paper offers a typology of human-like interactions, from Socratic opponent to study buddy and mentor. Meanwhile teachers struggle to control AI-enabled cheating, as no technological solution has attained a reliable success rate, and many pedagogical solutions (handwritten work, oral presentations) have their own challenges.

Given this complicated reality, insofar as we can grasp it, we can hazard several futures models based on extrapolation of the recent past and the history of technological innovations in education. Recall that these are based on stacks of contingencies.

  1. A divided educational world. Some teachers, staff, and administrators encourage AI use while others forbid it. Individual schools, campuses, and associations experience internal splits, which might remain cordial or turn contentious. At a larger level we will see academic units compete with each other based on their AI strategy. Local, regional, or national politics might shape these attitudes, as, for example, schools align with state strategies. The very wealthiest institutions might peel away from the rest, as they have the resources to fund more ambitious AI projects.
  2. The appearance of an AI intermediary layer. Some students, staff, and faculty use AI as an agent or interface with the world, using it for functions as diverse as web search, concert ticket purchases, family emails… and research writing, conducting literature reviews, purchasing lab supplies, and grading. People interact with each other in the educational space through a digital layer composed of multiple artificial intelligences. Cheating students seek to get their bot-content past instructors wielding AI-powered countermeasures.
  3. Education decays. As more people perceive they can get educational benefits from AI they gradually disengage from academic institutions. When easy prompting or low-cost mentorbots offer an apparently decent educational experience, why should one pay serious money for classes, either directly or through taxes? If people see AI as, on average, presenting a better experience than teachers provide, why should we have schools at all? Educational institutions’ reputations sink.
  4. Surveillance dystopia. Authorities use AI to increase their apprehension of residents’ lives in detail, and that includes the work of students at school. Fears of student cheating lead to ever more powerful and invasive proctoring tools, supervised by AI. Personalized learning becomes a means of micromanaging individual learners’ lives.
  5. The 21st century AI winter. A combination of forces shatters the AI giants. Copyright lawsuits, state policies, financial failure, and cultural dislike bring down each major AI project. The global economy takes a recession-level hit. Schools lose the resources they devoted to the technology.
  6. Academia becomes stronger. People perceive generative AI as unreliable and unstable, producing too many hallucinations or mirages. Educators, in contrast, appear to be more reliable. Schools’ reputations grow as trusted sources. Additionally, as AI changes the world, academics appear as guides to the transformation.

Each of these possibilities depends on certain combinations of attitudes, perceptions, policies, and collective action. None may describe the total picture, as we noted at the essay’s outset. Indeed, each may come to pass in different parts of the world at different times, given the scale and diversity of human civilization and worldwide schooling. We could see some institutions divided while others are not, one government installing a surveillance dystopia while a neighbor refuses it, one nation’s academic system declining in the face of AI while another improves its reputation. Education’s adoption of AI is, and will likely remain, uneven. To adapt the famous words of William Gibson: AI in education is already here, just unevenly distributed.

AI and the Future of Education


  1. Carlota Perez, Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages. Northampton, Massachusetts: Edward Elgar, 2003. ↩︎

  2. Cf. the classic model from Everett M. Rogers, Diffusion of Innovations, 5th Edition. New York: Free Press, 2003. ↩︎