Thursday, February 13, 2025

Swindell et al., Against Artificial Education

Swindell, Andrew, Luke Greeley, Antony Farag, and Bailey Verdone (2024) “Against Artificial Education: Towards an Ethical Framework for Generative Artificial Intelligence (AI) Use in Education.” Online Learning 28(2), 7-27.


Summary:

This interesting article argues for an ethical framework drawing on the work of Günther Anders, Michel Foucault, Paulo Freire, Benjamin Bloom (actually, the Revised Bloom’s Taxonomy), and Hannah Arendt. In the event, Anders, Foucault, and Freire are discussed briefly for broader ethical context, but the main focus of the article is the addition of an ethical dimension to Bloom’s Taxonomy using Arendt’s hierarchy of labor, work, and action.

They apply this to the actually existing use of AI by imagining this, frankly, quite likely scenario:

Let’s consider an example of how AI might be used with current GPT technology in a classroom. A journalist, under pressure to produce more consumable content for its struggling publication, uses a GPT to write a story about the benefits and costs of electrical vehicle production and use. A teacher, excited by the labor-saving allure of an AI teaching assistant product called Brisk, uses the software extension to read the news story about electric vehicles and design a 60-minute lesson plan for their students, complete with learning goals, discussion prompts, a presentation activity, and summary quiz about the reading. The students, given carte blanche to use their school-provided Chromebooks, “read” the story using an AI platform like Perplexity, which provides summary analysis and key takeaways for them to use in their discussion and respond to the quiz. Simultaneously, they use Microsoft’s AI image generator to create a slide deck for the class to graphically represent their group’s ideas. The teacher completes the assessment cycle by having their AI assistant grade the quizzes, provide feedback to the students, and input their scores into a learning management system. (Swindell et al., 2024: 17).

[Brisk is a classic example of the stark cynicism of our current use of GAI, allowing “instructors” (the term loses meaning in this context) to automatically generate “feedback” on student essays, which you (the instructor) are then encouraged to “personalize” and present to the students as “from you.”]

The authors’ critique of such a situation:

In this scenario, the AI engages in activities of labor and consumption, while all of the parties involved advance nothing of lasting significance, and if debate or critical reflection arise amongst students it is an incidental, rather than planned, outcome of the AI-prescribed lesson. Indeed, the Brisk teaching assistant might be well programmed to incorporate into the lesson features of the RBT such as understanding, evaluating, and creating activities; but unless a human being in this process is attuned to helping learners act in the world and make it a place, using Arendt’s (1963/2006) words, “fit for human habitation,” ... the most common educational experience might become, ironically, ones in which humans are unnecessary. (ibid., 18)

They go on to propose a “Framework for Ethical AI Use in Education,” in the form of a graphic incorporating insights from each of the five thinkers they draw on. They apply this framework in two examples, which are, unfortunately, not particularly satisfying. They begin with a list of “guiding questions” for lesson design using AI:

1. In what ways are our historical, technological, social contexts shaping how we think and act; what activity or experience can shock learners into appreciating their contingency?

2. Will the technologies we are going to use advance humanizing ends? In what ways can the technology enhance or harm the co-creation of knowledge?

3. How can we design learning activities that have benefits beyond their own sake; how are the learning activities helping students to act in the world?

4. In what ways can AI reduce the burdens of teaching and learning labor while increasing the capacity to act in the world? (ibid., 22)

[The first two questions show the influence of Foucault and the rest; the last two are primarily informed by Arendt.]

Their first proposed exercise involves a research project in which students seek to learn about their local “political landscape.” AI is used to research who the local elected officials are, what the local issues are, and what the important fora for discussion and debate are. Students then form their own positions using this knowledge: the idea is that AI performs the “labor” (Arendt’s lowest category), leaving humans free to focus on “action” (Arendt’s highest category).

However, having done exercises like this in the past without AI, I find this just one more attempt to rationalize an “ethical” or “harmless” use of AI – namely, AI is inserted as an extra element where it is not actually needed. Local political entities, candidates, electoral bodies, and so on have websites with all this information – it is not hard to find. Using a generative AI search tool only introduces the likelihood of errors, along with the dangerous habit of treating AI as a reliable source of information. At best, AI could be asked which websites contain this information, and the information then looked up on those websites (with the added hope that the list is correct, of course). What is difficult is not the “labor” of looking up information, but the process of reading through debates, articles, and so on in order to evaluate and formulate issues and positions – and it is this that students are likely to use AI for, against the recommendations of Swindell et al., since it involves higher-level Bloom’s and corresponds to “action,” which is supposed to be left to humans.

In their second example,

students are tasked with researching a topic of their choosing both to learn about it and apply this knowledge to their own context. To facilitate this endeavor, AI acts as an agent of Socratic dialogue and questioning for the student, helping students generate research idea topics that will be specifically catered towards student interests. AI will be equipped to ask students questions regarding their level of interests and commitment, suggest other topics of potential interest based on specific student response in addition to refine students’ thinking regarding logical sequencing of topic selection and eventually argument. This personalized approach allows them to analyze how these topics manifest in their own lives and communities, gaining valuable insights. (ibid., 24)

Again, why is AI required to engage in Socratic dialogue? First of all, isn’t this the instructor’s job? (And one of Brisk’s more cynical applications is precisely such an automated “feedback” generator.) But more deeply, isn’t this an opportunity for students to engage in Socratic interaction and mutual critique with each other? After all, the authors have been citing Freire on conscientization and the need to allow students to develop control over their own learning process. The instructor could easily model Socratic questioning in class, and give students example questions and topics to guide them in developing their own practice. Delegating this to AI is an opportunity lost.

Thus, we have yet another attempt at reasoning out an ethical use for AI in the classroom that fails to provide any good reason for actually using AI in the first place. Seeing as the primary use of AI today is 1) to avoid having to do any actual work or difficult thinking, and 2) to avoid interacting with people, it is hard to see how a “humanist” or ethical use can gain much traction until this situation – and its underlying causes, which pre-date the development of generative AI – is addressed.

Another limitation of the model may be its reliance on Arendt’s hierarchy of labor-work-action, which has been reasonably criticized as reproducing an arbitrary, classist distinction (cf. also Sennett 1990). It is not true that we learn or gain nothing from activities classed in this model as “labor,” nor can a clear line in fact be drawn between the actual, complex, productive activities that Arendt sorts into these three a priori types. More to the point, it is not the type of work, but the social context – that is, the relations of production – which renders some kinds of work more meaningless or alienating than others. Likewise, it is not the mere fact of automation that is problematic, but how that automation is deployed, to what ends, and in whose ultimate interests. The authors make some nods to this political-economic context (via their discussion of Freire, Foucault, etc.), but the proposed ethical framework does not much reflect it.

Beyond this, the insistence on a “humanist” framing could itself be a limitation (Arendt in fact called herself an “anti-humanist”). The result is yet another call to keep “humans in the loop,” as masters rather than servants of the technology – as if it were the relations between humans and machines, rather than those between humans and other humans, that were ultimately at stake.

What difference might a post-humanist view make to the issue? Actor-network theory (ANT), for example, could have been brought in to consider the human subject as a historically and contextually created “figure” in a larger, more-than-human assemblage – a figure whose dissolution, as the disciplinary society is supplanted by the control society, occurs, in Foucault’s words, like the erasure of “a face drawn in sand at the edge of the sea” (Foucault 1970: 387).



Foucault, Michel (1970) The Order of Things. Vintage Books, New York.

Sennett, Richard (1990) The Conscience of the Eye: The Design and Social Life of Cities. Alfred A. Knopf, New York.



