A Shiny Pony in Higher Ed

Jess Mitchell
OCAD University, Toronto / Inclusive Design Research Centre

There is a shiny pony trotting around our classrooms and campuses. It’s a fancy pony with a long, flowing mane and tail. It promises to do some of our work for us, to un-employ others of us with its mad skills, and simultaneously to break down and create barriers to equity. This pony is wild, has no harness, and no self-awareness. It is fickle, trampling on our intellectual property, leaving no trace, and galloping off without clear provenance, declaration of influences, or destination. Oh, this pony also does not understand, learn, or know. Furthermore, it does not think or feel. It is an amoral pony.

Sounds like a dangerous pony to have alongside people practicing, learning, trying, failing, and growing. In reality, the pony is neither fully dangerous nor fully tamed. It’s fun to play with, it can do some repetitive things quickly, and it can give us a quick start (not unlike the enormously useful and often maligned open content on the Web, including Wikipedia). The pony making all the buzz in education these days is Artificial Intelligence (AI).

Sometimes I try to ignore the AI shiny pony, to get a clearer picture of what AI might do. When I think of it as an amplifier, capable of exacerbating issues that already exist, I think I get some clarity. Like most new technologies, it will amplify those things people have conceptualized accurately and also those things people have oversimplified and flat out gotten wrong (this is partially generous since it will also make things up). So it will help and hurt.

Let us take the question: how will AI impact professional competencies in universities? To ask the question at all is to reveal something we have gotten wrong – that there ever could be a fixed, single, and enumerable list of competencies that someone can articulate for any given profession. We will all no doubt practice and gain some ability to “engineer” good prompts. Still, competencies ought never stay the same (or else they should be abstract enough to adapt to change). Isn’t it desirable for us to have diverse approaches to our professional goals? Isn’t there more than one way to succeed? Isn’t it desirable for us to grow and adapt and evolve and iterate toward a better way (one of justice, one of efficiency, one of ethics and equity—or, how lovely to imagine all of the above)?

Institutional Impacts

AI will amplify harm—do we know what to do?

We are all using AI already; our institutions are invested already. We have been through this before, too: a new technology offering promises and presenting harms. By now we should be prioritizing the promises and protecting against the harms. If we aren’t, that suggests an amplitude problem on the question of harm reduction in our institutional policies. We know there are harms (with AI and well beyond). Do we have a policy in place to protect people from harm? If not, why not?
Some AI tools discriminate based on skin colour. Joy Buolamwini’s work considers what she must do to be facially recognized in an airport, or a workplace, or in a bathroom at a soap dispenser (Buolamwini, n.d., 2020, 2023). Who on our campuses can use AI technologies?

AI will amplify poor procurement approaches.

Again, this points to procurement policies that do not protect against harm. If institutions still do not require that the technologies they procure meet basic accessibility and inclusion standards, such as ‘applications must interoperate with whatever adaptive tools the user prefers and must be operable by anyone,’ why not? It’s the law. Do harm reduction and prevention get mentioned in your procurement policy?

AI can amplify and yes, it can produce too.

The conundrum we are in now is that, without clear policies, no one is quite sure how to use AI, whether we may use it, or what kind of impact using it has on individual people and the environment. The result is a guessed-at hierarchy of use: it is acceptable to use AI to generate materials such as bibliographies (though it sometimes fails), but it is not desirable when conceptualizing new research. We probably should have policies that articulate the hierarchy and acceptable uses of AI. One thing that seems agreeable is that use of AI must be referenced, and those who use it should be transparent in stating how they are using it. A useful tool is Leon Furze’s AI Assessment Scale, which educators can use to specify the level of AI use they would like and students can use to self-disclose how they have used AI. What would you do if an unruly, amoral pony that wasn’t registered at the university entered your classroom and took a seat? If you don’t have an answer, Leon Furze’s work is a good place to start.

Pedagogical Implications

AI can amplify poor pedagogy and poor learning materials.

Yes, AI can in some cases create alternative modes of content, even augmenting the content that is available. But it will not solve everything for everyone. “Humans in the loop” will still be responsible for the humanizing, relational part of teaching and learning. An AI might show programmed care, but in what form? The smiles, the glances of surprise at something someone says, the notes we write to our students about their work are sometimes so deeply human they are inimitable.

AI will amplify poor understandings and weak policies around intellectual property and open licensing.

AI seems unencumbered by any obligation to reference content, and it seems to ingest licensed content without hesitation. It isn’t clear what impact this will ultimately have on the commons. Regardless, AI challenges our contemporary, professional, and academic practice of knowing where content comes from. Will this jeopardize the commons? Will AI adhere to Creative Commons licensing rules? So far, it isn’t adhering to any licensing standards. Will those practices and standards change? Maybe, in certain circumstances. We will still have intellectual property, somehow. It would be a good idea for those in education to be part of that conversation. We should ask: how is AI tools’ treatment of licensed materials different from what Aaron Swartz did in the basement of MIT and was prosecuted for? And do we grant the tools more rights than we did Aaron? (Knappenberger, 2014).

AI and Ethics

AI will amplify the distance between those who have access and those who do not.

It will amplify access to various modes of content for those who already have access. So, unfortunately, no, AI will not solve our cultural biases, geographic disparities, and inequities. It will not. AI already conducts surveillance and manipulates at scale. AI will amplify bias without moral judgement or recourse.

AI will amplify and exacerbate data breaches without respect for privacy.

It will change the ways we do research, the ways we protect data and each other. At the same time, it can be a good study assistant and aid. As Furze’s assessment shows, with some guidelines, AI can be a great way to start research, or writing, or learning. With guidelines.

This pony brings with it privacy breaches, data abuse and misuse, statistical discrimination, homogenization of outputs, surveillance, hallucinations, censorship, and more (Mitchell, 2024). Is the pony going to systematically disassemble the ecosystem that we’ve built around our teaching, learning, and research? The thread through all of this is human. If we can point to a core competency we all must nurture, then let it be Questioning.

The threat of doing harm should slow us down. If ever there was a moment for higher ed institutions to unequivocally focus on people, not profit or ponies, it is now. Because let’s face it: this is a wild pony with amplitude problems. Let’s get back to talking about how to be humans (Students et al., 2022).

References

Buolamwini, J. (n.d.). Algorithmic Justice League. MIT Media Lab. Retrieved October 8, 2023, from https://www.media.mit.edu/projects/algorithmic-justice-league/overview/

Buolamwini, J. (2020, June 4). We Must Fight Face Surveillance to Protect Black Lives. OneZero. https://onezero.medium.com/we-must-fight-face-surveillance-to-protect-black-lives-5ffcd0b4c28a

Buolamwini, J. (2023). Unmasking AI: My mission to protect what is human in a world of machines (First edition). Random House.

Knappenberger, B. (Director). (2014, July 1). The Internet’s Own Boy: The Story of Aaron Swartz [Video recording]. Participant. https://www.youtube.com/watch?v=9vz06QO3UkQ

Mitchell, J. (2024). Framework for Accessible and Equitable Artificial Intelligence (AI) in Education. https://openlibrary.ecampusontario.ca/item-details/

Students, Faculty, and Staff at OCAD University, Mohawk College, Brock University, Trent University, Nipissing University, University of Windsor, & University of Toronto-Mississauga. (2022). Learning to be Human Together. OCAD University, Learning to be Human Together Team. https://ecampusontario.pressbooks.pub/onhumanlearn/