8th November 2020 | Project Update

I had wanted to begin my project update with an apology for being late. After all, I did say I intended to post one on the first of every month … But given that the media finally called the US election yesterday (good news which I am still celebrating), and given the nail-biting days and nights I spent scouring the internet for election coverage before that, I am, this one and only time, respectfully unapologetic. I hope you understand why. I did try to write my update during spare pockets of time (usually in the mornings), but other factors seemed determined to prevent me from sitting with my thoughts long enough to write something worth writing about. Until now.

As I mentioned in my first project update, my aim with this website is to have a place where I can share my research experiences and processes with others; this would include identifying and discussing opportunities and challenges as my programme of research unfolds. As I write my project update for November, I’m thinking about something I teach my students in my research methods modules: the need for transparency when doing research on and for education. Transparency is central to The Turing Way. The website states:

The Turing Way is an open source community-driven guide to reproducible, ethical, inclusive and collaborative data science.

Our goal is to provide all the information that data scientists in academia, industry, government and the third sector need at the start of their projects to ensure that they are easy to reproduce and reuse at the end.

The book started as a guide for reproducibility, covering version control, testing, and continuous integration. However, technical skills are just one aspect of making data science research “open for all”.

I’m especially interested in the section of the Guide that discusses guidelines for ethical research. How might these guidelines apply to research in the social sciences, especially when doing research with children, with vulnerable populations, and/or historically marginalised groups?

In this guide, we invite discussions and guidance on ethical considerations that a data scientist should keep in mind to ensure not only that their work maintains a high level of moral integrity, but also that their work is carried out at the highest scientific standards.

After reading this, I started to think about what moral integrity might mean in the context of my project as a school ethnography, and about what The Turing Way calls empirical reproducibility:

Empirical reproducibility: When detailed information is provided about non-computational empirical scientific experiments and observations. In practice this is enabled by making data freely available, as well as details of how the data was collected.

I’ve also started to read a little more about research that is reproducible but not open. Again from The Turing Way:

The Turing Way recognises that some research will use sensitive data that cannot be shared and this handbook will provide guides on how your research can be reproducible without all parts necessarily being open.

So in line with The Turing Way, I’d like to keep working towards transparency in educational research by sharing with you the trajectory of my thinking and writing—from the beginning stages of condensing years of research, critical questions, and transformative educational practices into a handful of documents, eventually cohering into the project you now know as Fair-AIEd.


How did I start writing my research proposal? I’ve been asked this question on several occasions, and it’s been more difficult to answer than I thought it would be. To answer, I’d have to locate the beginnings of my proposal, but what marks the beginning? Am I the beginning inasmuch as my project is the synthesis of my values, beliefs, research interests, experiences, and so on over the years? In this capacity, can genesis be traced to my identity in terms of Ricoeur’s distinction between idem- and ipse-identity (de Vries, 2010), or what makes me me (if I could find this at all)? Or is the beginning something more tangible? A picture? A word? Or perhaps a policy that acted as the muse which drew my ideas into synthesis?

I guess I began ‘writing’ my proposal as a result of an observation I made at the IoT Week conference in Geneva in 2017. I was working at the London School of Economics at the time on the Virt-Eu project. I’d gone to the conference to share my initial thoughts on drones, privacy and ethics in smart city systems. I was also there to do fieldwork, which meant interviewing research participants about responsible/ethical innovation and IoT. While the conference was undoubtedly very interesting for making sense of the ethics of new technologies in the European context, it did not escape me that certain groups were quite vocal about making decisions on behalf of Global South populations who should have been right there speaking for themselves, but were not. This event perhaps marks the point in time when I started to think more about fairness in the context of new technologies in education. However, if I were to presume a more concrete beginning for the sake of simplicity, I think it would be reasonable to say that the beginning of Fair-AIEd is best captured in a document I wrote in March 2018 which summarised my research proposal on AI, ethics, and education.

Since then, I’ve been developing my ideas; reading widely and more closely; and focusing on ODA (Official Development Assistance) countries as I move away from western contexts. I’ve searched for works I thought would be salient to my project, whether academic articles, books, or online blog posts. Mostly, I’ve been thinking about how to stack theories (yes, I am guilty of theory stacking) into a framework that agrees with Hannah Arendt’s understandings of the more liquid aspects of language which at times randomly meaning-shift into something unexpected. As Champlin (2013) writes of Arendt’s work:

…she also employs natality in conjunction with the empty word “fact” to work out a uniquely free practice analogous to the materiality of language in how it both relies on specific formulations drawn from the past but can also at any moment leap to new meanings on the model of metaphor and citation. (p. 152)

What follows is a copy and paste of the summary document I authored in early 2018 which I continued to develop into my Fair-AIEd project:

AI, Education, and Intercultural Ethics

To benefit from the potential of AI in education, we must move beyond the search for more computational power or problem-solving capacities (IEEE, 2016)[i], ensuring that these technologies are aligned with the moral values and ethical principles of a ‘Good AI Society’ (Floridi, 2018)[ii]. AI systems must work toward benefiting human beings and the environment in a way that reaches beyond addressing functional goals and technical problems. As such, we must engage more critically with these emerging technologies, interrogating their properties and the social constructs they maintain, perpetuate, and disrupt (Kalantzis-Cope, 2011)[iii]. The aim, then, of this project is to understand the dynamics of ethics and AI on several levels:

  • Ethics by Design: the technical/algorithmic integration of ethical reasoning capabilities as part of the behaviour of artificial autonomous systems;
  • Ethics in Design: the regulatory and engineering methods that support analysis and evaluation of the ethical implications of AI systems as these integrate or replace traditional social structures;
  • Ethics for Design: the codes of conduct, standards and certification processes that ensure the integrity of developers and users as they research, design, construct, employ and manage AI systems. (Dignum, 2018, p. 2)[iv]

Research Question: What are the social and ethical implications for education and training in and with AI, including those relating to accountability, responsibility, and transparency?

Using the model of public engagement (PE) in Responsible Research and Innovation (EU Commission, 2018)[v], the project seeks to facilitate the co-creation of an AI future with diverse actors (e.g. school community members, researchers, industry, policymakers, NGOs, civil society organisations), advancing public discussions about how to create social and ethical implementations for AI systems in education, and aligning them with the values of ‘human flourishing’ (eudaimonia). As articulated by Aristotle, the path to eudaimonia begins with critical reflection and ethical considerations that help us define how we ought to live (Annas 1987–1988)[vi]. In the context of AI in education, by aligning the design, development, and application of AI with the values of its users, this project aims to prioritise human flourishing as a key metric for progress. (2017-2018)

Approaches to contextualising research through classroom practice

Also relevant is my summary of the kinds of classroom activities I might conduct as the data-collection piece of the digital sociology, focusing on three areas: affective (emotions, moods, feelings), pedagogical, and economic. I’ve been working on these ideas for a while, and began articulating them in a blog post I wrote during my time at Monash University in Australia. Link is here. In the interest of reproducibility, one of the fundamental principles of Open, I have copied and pasted the preliminary document for those of you interested in the more granular details of my project:

The social and ethical implications of embodied AI in formal education.

In light of the growing interest in the use of embodied AI in the classroom, the project will examine the social and ethical implications of the use of intelligent robots in primary and secondary school contexts. Educational researchers have claimed that social robots (e.g., Nao and Pepper) provide innovative means for teaching and learning with large groups, that they possess interactive capabilities appealing to students’ emotional responses, and that they have the capacity to engage a range of student populations. The use of autonomous interactive robots in learning contexts certainly offers many benefits. But as autonomous robots are increasingly developed to assist human beings with day-to-day tasks, questions regarding the implications of human-robot interactions must inevitably be addressed.

Although some scholars have explored the ethics of robot-student interactions, sustained research on the implications of robots in school environments has been lacking. Important questions that must be asked centre on the social and ethical impacts of using autonomous robots in school settings. The project will pay attention to three dimensions of human-robot interactions: affective, pedagogical, and economic.


Affective:

  • Examine how students make sense of, trust, and engage with robots in knowledge spaces;
  • Examine how digital human beings and anthropomorphised embodied AI create and engage with emotions.

Pedagogical:

  • Gain new insights into the effectiveness of autonomous robots as mediators for adaptive learning in educational institutions.

Economic:

  • Identify how robots are being used in educational institutions – what work is being done by robots?
  • Examine the added economic value of having robots in these settings;
  • Examine the implications for teachers (e.g. deskilling and de-professionalisation).

Guiding Research Questions: What assumptions about human behaviour and intelligence underlie current embodied AI development and innovation? How are social values embedded and manifested in AI design? How might frameworks grounded in responsible innovation be integrated with these assumptions to transform how AI innovators/developers make decisions when designing for educational AI?

Methods: The program of study will adopt a multi-layered approach across three settings:

  1. Regulatory landscapes (encompassing global institutions such as UNESCO, national education policy frameworks, and school-specific policies);
  2. Design/development (focusing on industry); and
  3. Schools and schooling.

Stage 1 will map AI in education using a political economy method (Mosco, 2009) to trace the emergence of key actors in the design and development of educational AI.

Stage 2 will use both qualitative and quantitative methods to generate data about the implications of AI in schools. Detailed ethnographic case studies of AI design/development and use will be employed to make sense of how different actors negotiate competing discourses in regards to new technologies and innovation. Overall, the analysis will employ conceptual tools drawn from institutional ethnography (Nichols & Griffith 2009), political economy, and discourse analysis (Fairclough, 2003).

Stage 3 will involve policy-makers, industry, and school staff in focus groups exploring project findings. The aim of this participatory design phase is to develop new understandings of embodied AI in education that can inform responsible development, equitable pedagogical practice, inclusive policy decisions, and opportunities for meaningful learning. (2018)


As I watched the elections in the US while looking more closely at the various kinds of public-private partnerships (P3) currently discussing educational technology and SDG4, I also found myself wondering about the role of power as central to my research focus, a matter that was raised during my kick-off meeting back in September. So what is power? How is it manifested? How is power exercised through policy-making? At the moment, I’m thinking through what power might mean against Foucault’s representations of power: in The Order of Things, where he focuses on words, and in his later works, where he reflects on power as a Leviathan wielding sovereign power, a power that has come to comprise two forms: 1) disciplinary power, or anatomo-politics, given its aim of training the human body (Discipline and Punish), and 2) bio-politics, or bio-power (History of Sexuality). Given what I have been thinking about in terms of the ecological framework of this project, including Latour, and its attachments to post/de-colonial theories and political economy, how does power fit? Is Foucault’s illustration of power and discourse sufficient to account for how power is enacted through the digital, a way of being online/offline that cannot be captured properly using more traditional frameworks for analysis? See here and here.

Prior to Foucault, philosophers tended to agree that power had an essence, such as sovereignty or mastery. Max Weber, for instance, depicted the power of the state as comprising a “monopoly of the legitimate use of physical force.” Thomas Hobbes presented the essence of power as the sovereignty of the state. On his view, at its best power would be exercised from the singular position of sovereignty which he called “The Leviathan” (Koopman, 2017). Within and through this Leviathan, bodies would be disciplined in order to produce obedience. While I do think some of Foucault’s constructs may not be as useful for understanding the current digital climate as they once were (e.g. theorising around the panopticon), his conception of power remains important for critical analysis of AI in education. Koopman writes:

Classically, power took the form of force or coercion and was considered to be at its purest in acts of physical violence. Discipline acts otherwise. It gets a hold of us differently. It does not seize our bodies to destroy them, as Leviathan always threatened to do. Discipline rather trains them, drills them and (to use Foucault’s favoured word) ‘normalises’ them. All of this amounts to, Foucault saw, a distinctly subtle and relentless form of power. To refuse to recognise such disciplining as a form of power is a denial of how human life has come to be shaped and lived. If the only form of power we are willing to recognise is sovereign violence, we are in a poor position to understand the stakes of power today. If we are unable to see power in its other forms, we become impotent to resist all the other ways in which power brings itself to bear in forming us. (ibid)

Scholars have also spoken about power in terms of data logics, including big data practices, the freeing yet controlling potential of social media, and how these things play out in postcolonial contexts as well as their social and ethical implications for historically marginalised groups (Gandy, 1993; Mbembe, 2003; Noble, 2018; Couldry & Mejias, 2019).[vii]

When examining how power might be exercised, we must first be able to appreciate the people and processes involved in setting the normative codes that frame the way we can or cannot behave in the world of things. In the case of Fair-AIEd: who sets the norms and codes for how educational technologies are designed, managed, and used? How are these issues being discussed? Preliminary analysis at the level of discourse (thematic analysis) has helped me make some sense of the general landscape of AI in education. My next steps are to identify and home in on particular contexts that may prove fruitful avenues for further exploration. This brings me to a Small Grant Fund I received in 2018[viii] to continue developing my UKRI research project. The title of my seed project proposal: Mapping the implications of AI in educational development discourses: Public-private partnerships.

Public-Private-Partnerships (P3): AI, ethics, and educational development

My intention in this stage of my research design process was to identify how AI in education and development was being framed in popular P3 discourse:

Research question: What are the social, political, and ethical implications emerging from the use of AI in educational development?[ix]

Rationale: Artificial intelligence in education (AIEd) is celebrated as a pathway to gaining deeper understandings of how learning occurs (Luckin, 2016). These machine learning systems can also be viewed as contributing to the trend of framing the purpose of education as a dynamic of market mechanisms, including measurements, standardisation, commercialisation, and competition (Biesta, 2009). However, the adoption of such mechanisms as solutions to the challenges facing educational development has the potential to reproduce a political, economic, and social order that may lead to forms of exclusion and discrimination (Locatelli, 2018), the consequences of which are at odds with ensuring “inclusive and equitable quality education and promote lifelong learning opportunities for all”, an Education 2030 Sustainable Development Goal (SDG4) and its corresponding targets (UNESCO, 2015). This research will map the social, political, and ethical dimensions of AI technologies in educational development contexts to better understand and investigate this challenge/tension. It will use a mixed-methods approach that combines data visualisation and critical discourse analysis to examine multi-scalar documents emerging from public-private partnerships (P3).

Aim: This project aims to analyse the social, political, and ethical dimensions of AI technologies in educational development contexts.

The project objectives are to:

  • Engage in a preliminary evidence gathering process in this innovative and emergent topic;
  • Establish a new interdisciplinary network of expertise (within UCL and beyond) to examine issues of socio-technical governance;
  • Develop and test digital methodologies to render visible the assumptions and intentions embedded in the discourses around AI technologies for educational development;
  • Build a foundation for an externally funded research project proposal.

Phase One

Conduct sentiment analysis, geoparsing, and text mining over a collection of educational technology policy/practice documents from P3 contexts (e.g. UNESCO, OECD, Pearson, UCL) to trace value propagation in debates about AI for educational development. Data mining will focus on the following objectives:

  • Identify the range of countries and kinds of educational contexts in which AIEd is being utilised;
  • Identify values and assumptions embedded in design, development, and application of these systems (e.g. marketisation, individualism, equity, etc);
  • Identify the benefits and challenges such systems might have across international education contexts;
  • Identify gaps between the focus of academic, state, and industry AI research communities in order to understand areas that may warrant further research attention.
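The value-identification step in the objectives above could be sketched as a simple term-frequency profile over a value lexicon. Everything here is hypothetical illustration rather than project data: the document snippets, the frame names, and the lexicon terms are all invented for the example, and a real analysis would of course use a far richer corpus and vocabulary.

```python
import re
from collections import Counter

# Hypothetical snippets standing in for P3 policy documents (e.g. from
# UNESCO, the OECD, or an edtech vendor) -- invented for illustration.
documents = {
    "unesco_2019": "AI can advance equity and inclusive quality education for all learners.",
    "vendor_brochure": "Adaptive learning platforms personalise instruction and boost market competitiveness.",
    "oecd_report": "Personalised AI tutors raise measurable outcomes and support standardisation of assessment.",
}

# A tiny, illustrative lexicon contrasting two value framings
# (marketisation vs equity); a real lexicon would be much larger.
value_lexicon = {
    "marketisation": {"market", "competitiveness", "standardisation", "measurable"},
    "equity": {"equity", "inclusive", "all", "access"},
}

def value_profile(text):
    """Count how often each value frame's terms appear in a document."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(tokens)
    return {frame: sum(counts[t] for t in terms) for frame, terms in value_lexicon.items()}

profiles = {name: value_profile(text) for name, text in documents.items()}
for name, profile in profiles.items():
    print(name, profile)
```

Even this toy version makes the gap-analysis objective concrete: documents with a high marketisation count and a low equity count (or vice versa) flag discursive divergences worth closer qualitative reading.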

Phase Two

Building on Phase One, the mined and collated data will be used to develop a data visualisation representing the discursive horizon of the educational AI landscape. Data maps will be used to foreground and trace the initiation, development, and adoption of particular kinds of values and aims, and how these ideas have been propagated across P3 discourses. A critical discourse analysis will then be applied to these maps to examine the political economy of AI technologies in educational development contexts.
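One minimal way to turn mined terms into a mappable structure is a weighted co-occurrence graph, where edges count how often two value terms appear in the same document. The per-document term lists below are hypothetical placeholders for Phase One output; the edge weights could then be handed to any network-visualisation tool.

```python
from itertools import combinations
from collections import Counter

# Hypothetical per-document value-term lists, standing in for the
# output of the Phase One mining step -- invented for illustration.
doc_terms = [
    ["personalisation", "equity", "efficiency"],
    ["personalisation", "efficiency", "market"],
    ["equity", "inclusion", "personalisation"],
]

# Count how often each pair of terms co-occurs within a document.
# Sorting each pair gives a canonical key, so (a, b) == (b, a).
edges = Counter()
for terms in doc_terms:
    for a, b in combinations(sorted(set(terms)), 2):
        edges[(a, b)] += 1

# Heavily weighted edges indicate values that travel together across
# the P3 corpus and so deserve closer discourse-analytic attention.
for (a, b), weight in sorted(edges.items(), key=lambda kv: -kv[1]):
    print(f"{a} -- {b}: {weight}")
```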

How do you plan to develop it further for a larger external grant application?

The proposed study is designed to test and develop a methodological grounding for a larger project that examines the relationship between AI processes and practices, and educational development. Ultimately, that project will address the broader question concerning the politics of artificial intelligence in education.

This larger project would centre on a study of the relationship between global governance institutions, AI industry infrastructures, academic communities, and local educational contexts. The central research questions driving the project will be:

  • What are the social, political, economic, and ethical implications of importing AIEd systems into international education contexts, including developing countries?
  • How can P3s most effectively channel machine learning to enhance education as a “public good”?
  • How can governments facilitate the creation of ethical AIEd policies for educational development aims?
  • What digital tools and methods can we employ to investigate the social, political, and ethical implications of AI in educational development?

The aim of the larger project is both conceptual and policy driven. Conceptual in that it seeks to contribute to the understanding of educational technology issues and their impact on the purpose of education. Policy driven (or applied) in that it seeks to mobilise social science evidence to facilitate the creation of ethical policies and practices for educational development aims (e.g. UN SDGs). It will build on the methodological tools developed in the pilot project, and engage in a participatory approach to develop new tools such as an AI Impact Assessment that schools might use to assess the impact of AIEd systems as well as their utility.

Which external funder do you expect to submit your full grant application to, and why?

The full grant application would be submitted to the UKRI Future Leaders Fellowships scheme (Round 3, June/July 2019). The Fellowship is an award that offers funding for early career researchers to undertake interdisciplinary research which addresses important social questions.

This project has potential to have a direct influence on national and global educational development policy, as well as on the behaviour of various organisations. It is ambitious in that it would seek international and interdisciplinary collaboration in order to answer a pressing social challenge with regards to ethical AI innovation in education.

Deliverables: (What specific outcomes do you expect from the project?)

This proposed project will produce the following outcomes:

  • A new interdisciplinary network of expertise (within UCL and beyond) to examine issues of socio-technical governance;
  • A foundation for an externally funded research project proposal;
  • A conference paper to be revised and submitted as an academic paper to a high-impact journal, focusing on a critical analysis of research findings. (2018)

At a more practical level, while a new lockdown has further delayed my ability to be in the field, I’ve grown more patient and less likely to be surprised by delays in project administration tasks. I have learned, thank goodness, not to worry so much and to occupy myself, as I wait, with other aspects of my project that will need attention at some point anyway.

The recruitment process is underway and an advertisement for a postdoctoral researcher (school ethnography in Ghana) has been posted on Twitter. Please do share the advertisement with others as it offers a secure full-time post for those interested in examining issues of ethics and technology through a critical lens. Link here.

I also submitted a poster about my Fair-AIEd project to the UKRI FLF Annual Cohort Event 2020. Link is here.

So there you have it. For this month’s update, I have shared information about the foundations of my Fair-AIEd project in light of Open (transparency and reproducibility). And now? I continue to think about how I might develop and apply theoretical frameworks to examine AI in education and development, as well as add more detail to the research methods I’d like to use for doing [digital] sociology in schools. As for where to next? The answer to this I will save for another day. I will, however, leave you with two quotes that have long resonated with me as a teacher:

There can be no keener revelation of a society’s soul than the way in which it treats its children.

— Nelson Mandela

When you love what you do, more than likely everything is gonna just come out decent.

— Ghostface Killah (Wu-Tang Clan)

Till next time!



[i] IEEE. (2018). The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Retrieved from
[ii] Floridi, L. (2018). Soft ethics and the governance of the digital. Philosophy of Technology, 31, 1-8.
[iii] Kalantzis-Cope, P. (2011). Properties of technology. In P. Kalantzis-Cope & K. Gherab-Martin (Eds.), Emerging Digital Spaces in Contemporary Society: Properties of Technology (pp. 3–9). London, UK: Palgrave Macmillan.
[iv] Dignum, V. (2018). Ethics in artificial intelligence: introduction to the special issue. Ethics and Information Technology, 20, 1–3.
[v] EU Commission. (2018). Responsible Research and Innovation. Retrieved from
[vi] Annas, J. (1993). The Morality of Happiness. New York, NY: Oxford University Press.
[vii] Gandy, O. H. Jr. (1993). The Panoptic Sort. New York, NY: Routledge; Mbembe, A. (2003). Necropolitics. Public Culture, 15(1), 11-40; Noble, S. U. (2018). Algorithms of Oppression. New York, NY: NYU Press; Couldry, N., & Mejias, U. (2019). The Costs of Connection. Stanford, CA: Stanford University Press.
[viii] Seedcorn grant (2018): Department of Culture, Communication and Media (UCL). Awarded as an interdisciplinary project, with my colleagues Doctors Andreas Vlachidis (UCL) and Ana Basiri (University of Glasgow).
[ix] I might have been a little naïve at the time of writing the research question for this seed proposal. I very quickly realised that there was no way in the world I could properly address all of these dimensions in one seed project.

Selena Nemorin


Dr Selena Nemorin is a UKRI Future Leaders Fellow and lecturer in sociology of digital technology at University College London, Department of Culture, Communication and Media. Selena’s research focuses on critical theories of technology, surveillance studies, tech ethics, and youth and future media/technologies. Her past work includes research projects that have examined AI, IoT and ethics, the uses of new technologies in digital schools, educational equity and inclusion, as well as human rights policies and procedures in post-secondary institutions.

