PROJECT UPDATE #9

by | 1st May 2021 | Ethics, Fair, Project Update, Regulation

I’m not trying to be rude, but my parents always taught me to speak up if I think something is bogus.

Pip Harry, Because of You

 

When I was 24 my boyfriend proposed to me. We were at one of our favourite outdoor raves in Australia (Earthcore). A full moon. Stars twinkled above us. The weather was perfect. He and I had been together for several years and he felt that it was time to settle down and make things legit. I held my breath as he whispered the words: “Will you marry me?”

While I was happy the way we were, his marriage proposal rocked my world. Those four words shocked me enough to move me to sell what little I owned, cut my long locks off, and buy a backpack, a Swiss army knife and one around-the-world plane ticket in response.  Marking the occasion, I also got my first tattoo to remind me later in life, like now, of my adventures. A couple of weeks later, I was leaving on a jet plane to start my solo backpacking journey in Tokyo, Japan.

During my travels, I visited many places, including the most spectacular areas sometimes missed by the tourist masses. I stayed in a treehouse above a poopy chicken coop in the wilds of Turkey, and I meditated under a tree in Nara (Japan) while surrounded by wild deer and cheeky monkeys. I even rode a spitting fiend of a camel along the sands of Wadi Mousa with Bedouins guiding my way.

Now there were definitely times when I was scared to be a single Creole female traveling alone in far-off countries, but somehow I managed to stay safe, always making friends along the way. To avoid drawing attention to myself, I would often dress like a boy—cargo pants, long-sleeved shirts, sneakers, and my trusty bucket-hat. That said, thinking back to some of the decisions I made at the time, I count myself lucky to be here at all to tell you my story.

Traveling to environments vastly different from those I was accustomed to opened my eyes and my heart in ways I could never have imagined. I know this probably sounds cliché, but it’s true. It was only when I went to Israel, for example, and saw how unfairly Palestinian people were being treated, how much they were suffering, that I started to think really seriously about issues of social justice… issues of fairness and equity, issues of displacement, oppression, and power; beyond issues related to my own experiences growing up as “one of those coloured kids”, which were challenging enough.

I remember taking a picture of a banner a group of activists had pinned to the side of a building in Jerusalem in protest of the Israeli incursions into Palestinian land (Image 2). This was all before the wall as we know it now was built. After I snapped that shot I emerged from my viewfinder to be greeted by a rifle pointed at my face. An Israeli soldier, younger than me, said rather stoically: “No pictures.” I knew better than to argue with the barrel of a gun, so I uttered my apologies and moved along, picture secured safely in camera for later viewing.

 

Image 2: Israel / Palestine many moons ago

I think it was that particular experience that moved me to develop an interest in the political and socio-cultural heritage of space and place. During my time backpacking through Asia, Africa, Europe, and North America I learned more about the many faces of injustice. Up until that point in my life I was aware of what was happening around me in the world, politically, economically, socially… but I had not actually come face-to-face with those who are truly suffering. Presence with the other allowed me to feel human connection, feel my own humanity, and, consequently, compassion. I also felt anger that what I had witnessed during my travels was allowed to happen and continues to happen. In hindsight, it was the suffering I witnessed while exploring the world that quite firmly rooted me within a particular set of values and principles that guide my life at home and at work, with praxis and activism as central.

Right about now my mother would likely remind me that ultimately we are all suffering. Indeed, it would be foolish of me to not acknowledge that we all suffer, rich or poor, black or white—in different ways whether it’s by being unwell, or watching the passing of a loved one, or suffering because of financial hardship, and so forth. Nevertheless, there are injustices in the world that ensure some people suffer unnecessarily, or suffer more than others for the benefit of those same others. How is this fair? To use a general example, despite the wealth of some nations and some corporations – wealth which is concentrated further into some ridiculously small percentage of a population – a significant number of people continue to live in poverty. A significant number of people have limited access to food and water, to shelter, power, health care, adequate and equitable sanitation, and little or no access to education. The most affected are usually girls and women. What are the moral implications of situations such as these? How is such disparity in the world fair? What does fairness even mean?

 

What is fairness?

Around the time of my backpacking trip I was reading Practical Ethics by the esteemed philosopher Peter Singer, who also wrote the influential essay Famine, Affluence, and Morality in 1972. In his essay he discussed our moral obligation to the poor. He argued that the way people in relatively affluent countries react to humanitarian crises cannot be justified. The same, I would say, still holds. For instance, take the recent cutbacks in funding announced by the UK government with regard to both research and foreign aid (e.g., girls’ education).

Singer called for a re-imagining of the way we approach moral issues, emphasising that “our moral conceptual scheme needs to be altered, and with it, the way of life that has come to be taken for granted in our society” (p. 230). While issues of fairness also connect to issues of equity and equality, what was pertinent for my experience at the time was Singer’s description of the tendency of people to morally distance themselves from those in need because of the actual physical distance between them.

Zygmunt Bauman and Leonidas Donskis (2013) speak of something similar: adiaphora, the sense of moral blindness and loss of sensitivity amplified in digital life (Moral Blindness: The Loss of Sensitivity in Liquid Modernity). Their perspective echoes Singer’s essay in terms of the tendency of human beings to be less likely to act in aid of others because of the distance of those requiring our aid; less likely to feel compassion for those who are suffering when we cannot actually be in the presence of their suffering. These observations are pertinent, I think, to contemporary contexts where life is becoming a 24/7 monitored spectacle driven by popularity, titles, and profit; where people are more often than not wired into the latest gadgets to ward off boredom and to escape reality; and where rapid-fire information blasting through our mediatised existence allows companies access to our imaginations, helping them capture our attention to induce habit-forming connections to machines and the internet (Johnson & Keane, 2017).[i] This onlife, a hyperconnected existence (Floridi, 2015),[ii] seems to leave us little time to pause, to settle into our own thoughts, and to reflect on issues of social (and environmental) justice. Look for yourself at the state of the world we live in. Need I say more?

Our reliance on and addiction to digital technologies has not only been a catalyst for changes in the way we communicate with each other, as Sherry Turkle (Connected, But Alone?) has often pointed out; this increasing simulation of life also places us at risk of losing our sensitivity to the plight of the other. Is it pessimistic to wonder whether we will reach a point where we are more likely to empathise with a robot or a Pixar character than with a starving child on the other side of the city, let alone a starving child on the other side of the world? Or are we already there?

Image 2: “We Want Fair AI Algorithms – But How To Define Fairness?” (Mostly AI)

 

I think my solo trip roused me from a blissfully ignorant semi-slumber. By the time I returned to Australia I had changed a lot. I had started to see the world differently to how my friends perceived it. I think I had also stepped onto the path of becoming more sceptical of the world. I had witnessed injustices oftentimes needlessly and carelessly inflicted upon the less fortunate, and I had developed a quiet indignation over such injustices—a restrained anger that even now, years later, continues to hum beneath the surface of my thoughts.

These differences in thinking became quite apparent when I spent time with my friends and family. It was hard for me to forget the things I had seen and heard during my travels. And while I tried to fit back into my old life in Australia, I felt like a stranger to my own social circle. I spent a few more years in Australia studying to be and then working as a primary school teacher (a profession still close to my heart) before leaving the country again. This time, however, when I left I had a toddler in tow.

 

Fairness and policy

I spent the next few years doing my postgraduate studies while working and raising a daughter. It wasn’t until I was back in Higher Education that I started thinking critically about how discrimination can be institutionalised through regulations and policy, and, as a consequence, practice. The catalyst for this realisation was twofold: a research project I worked on about leadership, policy, and equity in education; and my advocacy work around fairness when evaluating teaching qualifications of internationally educated teachers. While my backpacking adventures had me questioning fairness as a lived experience, returning to Higher Education had me thinking about fairness as an issue central to policy and governance—fairness as words (critical discourse) because words have power.

My most recent experiences doing research on technology, ethics, and education have reinvigorated my interest in fairness as a principle and a practice, which I will explore further in future updates. For this update, I wanted to spend some time offering you a little more history about my motivations, about how I came to design my Fair-AIEd project in the way I did. I hope to show how values and personal history can manifest in the design process, and then, of course, in the field, in the kinds of research questions I ask, in the populations I do research with, in the way I treat my research participants… which leads back to the question of researcher bias, which I wrote about in Project Update #4.

So how do my past experiences of injustice, whether harm I experienced myself or harm I witnessed being committed against other populations, shape the way I understand and enact fairness through my Fair-AIEd project? How does my view of fairness compare with others’? What considerations must I take into account when doing participatory and ethnographic research to ensure accurate interpretation and fair representation of research participants’ lived experiences?—an issue of hermeneutics. How do I come to a definition of fairness that I can build into the Algorithmic Impact Assessment, a major output of the Fair-AIEd project? (A culturally sensitive Algorithmic Impact Assessment application to help educators, leadership, policy makers, carers, and students make more informed decisions about which EdTech offering is best for them, and to protect them from predatory EdTech companies.)

There is a growing body of interdisciplinary research on fairness which I am slowly working through. I’d like to highlight some texts I have found useful as a starting point for making sense of how fairness is conceived, perceived, and enacted in a range of settings. For example, fairness is a central consideration when building machine learning systems.[iii] This body of research tends to focus on fairness in machine learning, or fair algorithms, as a technical process: an attempt to capture fair treatment in code and then enact it through algorithmic systems. Some helpful articles on the matter include Barocas et al (2020); Sánchez-Monedero et al (2020); Green et al (2019); and Chouldechova and Roth (2018) – for starters.
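To give a flavour of what “capturing fair treatment in code” can look like, the technical literature often formalises fairness as statistical criteria, such as demographic parity (equal positive-prediction rates across groups) or equal opportunity (equal true-positive rates). The sketch below is my own minimal illustration, not drawn from any of the articles cited above, and all of the predictions, labels, and group memberships are invented toy data:

```python
# Illustrative only: two common statistical fairness criteria from the
# fair-ML literature, computed on invented toy data.

def rate(preds, mask):
    """Fraction of positive predictions within the masked subgroup."""
    selected = [p for p, m in zip(preds, mask) if m]
    return sum(selected) / len(selected)

def demographic_parity_gap(preds, groups):
    """Absolute gap in positive-prediction rates between groups 0 and 1."""
    a = rate(preds, [g == 0 for g in groups])
    b = rate(preds, [g == 1 for g in groups])
    return abs(a - b)

def equal_opportunity_gap(preds, labels, groups):
    """Absolute gap in true-positive rates between groups 0 and 1."""
    a = rate(preds, [g == 0 and y == 1 for g, y in zip(groups, labels)])
    b = rate(preds, [g == 1 and y == 1 for g, y in zip(groups, labels)])
    return abs(a - b)

# Toy example: model predictions (yhat), true labels (y), group membership.
yhat   = [1, 0, 1, 1, 0, 1, 0, 0]
y      = [1, 0, 1, 0, 0, 1, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_gap(yhat, groups))    # 0.5: group 0 selected far more often
print(equal_opportunity_gap(yhat, y, groups))  # 0.5: deserving group-1 members missed
```

Even this tiny example shows why fairness cannot be settled purely technically: the two criteria can conflict, and choosing which one to enforce is a value judgment of exactly the kind the philosophical literature below addresses.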

There are also philosophical, theological, and legal conceptions, including the work of Thomas Aquinas, for instance, who advanced a theory of ethics based on distributive justice, which concerns the way collective goods and responsibilities can be fairly distributed among people in a social community (Dierksmeier & Celano, 2012).[iv] There have also been egalitarian attempts to develop theories of justice as fairness (e.g. Nussbaum, 2005, 2006; Dworkin, 1981; Sen, 1980; Rawls, 1971);[v] and feminist perspectives on fairness asking how to “create fair terms of social cooperation among persons conceived of as free and equal citizens given that they are deeply divided over fundamental values?” (Watson, 2013, p. 36).[vi] Also of importance are race-specific perspectives and perceptions of fairness, including works on fairness in the legal system (e.g. Kraus et al, 2019; Thomas, 2010; Hurwitz & Peffley, 2005),[vii] and views of fairness from disability studies (Bennett & Keyes, 2019).[viii]

 

Image 3:  Fairness in governance

 

With regards to fairness in education, I’ve recently found a gem of an article on ethics and AI in education. I especially appreciated the following excerpt:

There is also the need to consider explicitly issues such as fairness, accountability, transparency, bias, autonomy, agency, and inclusion. At a more general level, there is also a need to differentiate between doing ethical things and doing things ethically, to understand and to make pedagogical choices that are ethical, and to account for the ever-present possibility of unintended consequences. (Holmes et al, 2021)[ix] (emphasis mine)

I can’t begin to stress enough how important critical literature on AIEd is. As Holmes has also pointed out, no rigorous frameworks guiding the use of AI in education exist; this is something we need to be working towards diligently to protect the digital rights of children and young people (see the UN’s General Comment on children’s rights in relation to the digital environment).

I’d like to conclude my thoughts about fairness with an excerpt from a very readable article I discovered in Psychology Today (an oldie but a goodie). The author identifies several approaches to fairness in applied ethics:

Sameness: This notion of fairness is grounded in the view that everything is equal. For instance, everyone would pay the same for a movie ticket, whether that person is a child or an adult. No person has more than another. Here, fairness is about finding an average which is then applied generally. This is fairness as equality of outcome.

Deservedness: Here fairness refers to individuals getting what they deserve. Fairness is keeping what you deserve, or deserving nothing if it is unearned. On this view, fairness becomes a rational calculation. This is fairness as individual freedom.

Need: The third notion of fairness revolves around justice and need. For instance, one view is that those who have more to give ought to give a larger percentage of their income to help those who are less fortunate. This is the form of fairness reflected in Singer’s work, which acknowledges that human beings have obligations to one another (and to non-human animals, as well as the natural environment). Fairness and responsibility are connected. Here, compassion plays an important role in the calculation of fairness. This is fairness as social justice.
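The difference between the three notions becomes vivid in a toy allocation exercise. The sketch below is my own illustration of the distinction, not from the Psychology Today article, and every name and number in it is invented: a fixed pool of aid is divided equally (sameness), in proportion to contribution (deservedness), or in proportion to need:

```python
# Illustrative only: dividing a fixed pool under the three notions of
# fairness described above. All names and figures are invented.

def allocate_sameness(pool, recipients):
    """Equality of outcome: everyone receives an identical share."""
    share = pool / len(recipients)
    return {name: share for name in recipients}

def allocate_deservedness(pool, contributions):
    """Individual freedom: shares proportional to what each contributed."""
    total = sum(contributions.values())
    return {name: pool * c / total for name, c in contributions.items()}

def allocate_need(pool, needs):
    """Social justice: shares proportional to each person's need."""
    total = sum(needs.values())
    return {name: pool * n / total for name, n in needs.items()}

pool = 120.0
contributions = {"Ama": 60, "Kofi": 30, "Esi": 10}   # what each earned
needs         = {"Ama": 10, "Kofi": 30, "Esi": 60}   # what each requires

print(allocate_sameness(pool, list(contributions)))  # 40 / 40 / 40
print(allocate_deservedness(pool, contributions))    # 72 / 36 / 12
print(allocate_need(pool, needs))                    # 12 / 36 / 72
```

The same pool produces three very different distributions; the least-contributing, most-needy person receives the most only under the third rule, which is why fairness as social justice cannot be reduced to either of the other two.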

It is fairness as social justice that I am interested in, but how to engage with it as praxis? That is what I must think about a little more. There are many other questions about fairness, particularly fairness within the context of education and technology (e.g. AIEd), that I’d like to raise here, but I need to leave a little room to update you on what we have been doing on the Fair-AIEd project this past month. I will end this section, then, with a quote about fairness from a novel that makes me wonder why world systems have tended to be developed to be fairer for some than for others:

 

Nothing is fair in this world. You might as well get that straight right now.

–Sue Monk Kidd, The Secret Life of Bees

 

Recent Fair-AIEd activities

There has been a lot of forward movement in the project despite the setbacks encountered because of the pandemic. We’ve been designing the large-scale survey which we hope to deploy across primary schools in Ghana in the next few months. Access to quality education has been affected by the pandemic, with poorer districts in the Northern regions of Ghana most affected. While our aim is to reach as many primary schools as we can realistically access using an online survey (there are approximately 18,530 primary schools in Ghana), we also intend to deploy vans to 100 remote communities across 24 districts, potentially reaching about 7,200 children to generate evidence pertaining to technology perception, understanding, and use. We intend to travel to these out-of-school populations to conduct our survey on paper, orally, and in other accessible formats.

While I have particular questions I wish to ask of research participants specific to the Research Questions guiding the Fair-AIEd project, as a colleague suggested, I have used the Practical Guide to Implement Surveys on ICT Use in Primary and Secondary Schools (UNESCO, 2020) as a reference point. It is my intention to also consider the issues identified by Holmes et al (2021) regarding ethical questions around AIEd as I develop the survey:

  • How does the transient nature of student goals, interests and emotions impact on the ethics of AIED?
  • How can K12 students give genuinely informed consent for their involvement with AIED tools?
  • What are the AIED ethical obligations of private organisations (developers of AIED products) and public authorities (schools and universities involved in AIED research)?
  • How might schools, students and teachers opt out from, or challenge, how they are represented in large datasets?
  • What are the ethical implications of not being able to easily interrogate how some AIED deep decisions (e.g., those using multi-level neural networks) are made?
  • What are the ethical consequences of encouraging students to work independently with AI-supported software (rather than with teachers or in collaborative groups)?

Relatedly, I was over the moon to find the Fair-AIEd project had snuck into a UNESCO document about AI and policy-making: AI and Education – Guidance for Policy Makers (2021). The document offers a useful framework for making sense of how to regulate AI in education while also taking a critical stance, presenting arguments for and against the use of AI in education. It will be interesting to see how proponents of AI in education – specifically in educational assessment – will respond, given the EU has designated AI systems used in education as ‘high-risk’ in the recently released legal framework — Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (2021):

(35) AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed and used, such systems may violate the right to education and training as well as the right not to be discriminated against and perpetuate historical patterns of discrimination. (p. 26) (emphasis mine)

And as per the rules and actions which seek to position Europe as the “global hub for trustworthy Artificial Intelligence (AI),” high-risk AI systems will be subject to strict obligations before they will be given permission to be put on the market:

  • Adequate risk assessment and mitigation systems;
  • High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
  • Logging of activity to ensure traceability of results;
  • Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
  • Clear and adequate information to the user;
  • Appropriate human oversight measures to minimise risk;
  • High level of robustness, security and accuracy. (European Commission, 2021)

I look forward to seeing how this process unfolds, how its recommendations will be enacted in the AIEd sub-domain in the UK, and what lessons we might draw from this event for Official Development Assistance (ODA) contexts such as Ghana and South Africa.

I was also invited to participate in a public debate during the International Seminar on Internet Governance hosted by Escola de Governança da Internet (EGI.NIC.BR) and Comitê Gestor da Internet no Brasil (CGI.BR). I was fortunate to be on the panel about Platforms, Power, and Surveillance with Ulises Mejias (SUNY Oswego), chaired by the lovely Fernanda Bruno (Federal University of Rio de Janeiro / Founding Member of the Latin American Network of Studies on Surveillance, Technology and Society – Lavits). During our debate, we explored ideas such as data colonialism and bio-surveillance, including discussions on how one might regulate infrastructure in a way that is democratic, fair, and equitable for all. Ulises expanded on his timely work with Nick Couldry (LSE) on data colonialism and rationalities of extraction.

My own focus was on AI in education, specifically the need to regulate these systems by attending to their affective and biometric surveillance processes. Also addressed was the issue of platforms and monopoly, which I discussed using the example of the new Jeff Bezos Public-Private Partnership (P3/PPP) seeking entry into education under the name of EverFi (Image 4). These kinds of P3s are entities we should be focusing on more closely given a) Amazon’s human rights record and b) that Amazon et al are developing an intelligent system as infrastructure for education that tracks (and has the power to change) moods and can therefore nudge people to behave in particular ways.

 

Image 4: Bezos ventures into AI and education with a ‘Digital Wellness Network’ P3 initiative

 

I came away from the Internet Governance discussion with some excellent recommendations for reading, two of which I include here:

I had best end my update now as I’ve asked for your attention for long enough. Thank you for reading along as I think about my research out loud.

Till next time!

– Selena

 

 

—————————————————————————————————————————————————

References

[i] Johnson, N. F. & Keane, H. (2017). Internet Addiction? Temporality and life online in the networked society. Time & Society, 26(3), 267-285.

[ii] Floridi, L. (2015). The Onlife Manifesto: Being Human in a Hyperconnected Era. Retrieved April 25, 2021 from: https://link.springer.com/book/10.1007/978-3-319-04093-6

[iii] Jennings, J. (2021). Finding Fairness: From Pleistocene Foragers to Contemporary Capitalists. Miami, FL: University Press of Florida. doi:10.2307/j.ctv1hp5h7n

[iv] Dierksmeier C, & Celano, A. (2012). Thomas Aquinas on justice as a global virtue in business. Business Ethics Quarterly, Reviving Tradition: Virtue and the Common Good in Business and Management, 22(2), 247-272

[v] Sen, A. (1980). Equality of what? In S. M. McMurrin (Ed.), The Tanner Lectures on Human Values. Cambridge, UK: Cambridge University Press; Dworkin, R. (1981). What is equality? Part 1: Equality of welfare. Philosophy & Public Affairs, 10, 228-240; Dworkin, R. (1981). What is equality? Part 2: Equality of resources. Philosophy & Public Affairs, 10, 283-345; Rawls, J. (1971). A Theory of Justice. Oxford, UK: Oxford University Press.

[vi] Watson, L. (2013). Toward a feminist theory of justice: Political liberalism and feminist method. Tulsa Law Review, 46(1), 34-44. Retrieved April 20, 2021 from: https://digitalcommons.law.utulsa.edu/tlr/vol46/iss1/7; Nussbaum, M. (2006). Frontiers of Justice: Disability, Nationality and Species Membership. Cambridge, MA: Harvard University Press; Nussbaum, M. (2005). Capabilities as fundamental entitlements: Sen and social justice. In B. Agarwal et al. (Eds.), Amartya Sen’s Work and Ideas: A Gender Perspective (pp. 35-62). New York, NY: Routledge.

[vii] Hurwitz, J. & Peffley, M. (2005). Explaining the great racial divide: Perceptions of fairness in the U.S. criminal justice system. The Journal of Politics, 67(3), 762-783. Retrieved April 6, 2021 from: https://bit.ly/3ufV90o; Thomas, C. (2010). Are Juries Fair? Retrieved April 6, 2021 from: https://bit.ly/3aSlDxv; Overby, L., Brown, R., Bruce, J.,  Smith Jr. C., & Winkle III, J. (2004). Justice in black and white: Race, perceptions of fairness, and diffuse support for the judicial system in a southern state. The Justice System Journal, 25(2), 159-182; Kraus, M., Onyeador, I., Daumeyer, N., Rucker, J., & Richeson, J. (2019). The misperception of racial economic inequality. Perspectives on Psychological Science, 14(6), 899-921.

[viii] Bennett, C.L. & Keyes, O. (2019). What is the point of fairness? Disability, AI and the complexity of justice. Retrieved April 10, 2021 from: https://arxiv.org/ftp/arxiv/papers/1908/1908.01024.pdf; Givens, A. & Morris, M. (2020). Centering disability perspectives in algorithmic fairness, accountability, & transparency. FAT* ’20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. Retrieved April 10, 2021 from: https://doi.org/10.1145/3351095.3375686

[ix] Holmes, W., Porayska-Pomsta, K., Holstein, K. et al. (2021). Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education. https://doi.org/10.1007/s40593-021-00239-1

 

Selena Nemorin

Author

Dr Selena Nemorin is a UKRI Future Leaders Fellow and lecturer in sociology of digital technology at the University College London, Department of Culture, Communication and Media. Selena’s research focuses on critical theories of technology, surveillance studies, tech ethics, and youth and future media/technologies. Her past work includes research projects that have examined AI, IoT and ethics, the uses of new technologies in digital schools, educational equity and inclusion, as well as human rights policies and procedures in post-secondary institutions.
