PROJECT UPDATE #6

1st February 2021 | Project Update

Preliminary findings

… survival is not an academic skill. It is learning how to stand alone, unpopular and sometimes reviled, and how to make common cause with those others identified as outside the structures in order to define and seek a world in which we can all flourish. It is learning how to take our differences and make them strengths. For the master’s tools will never dismantle the master’s house. They may allow us temporarily to beat him at his own game, but they will never enable us to bring about genuine change.

Audre Lorde

When Audre Lorde made this declaration, she was suggesting that it is impossible to disrupt relations of oppression if individuals are restricted to operating only within the logic that justifies their very oppression. She was proposing an alternative vision for ethical life as a way of bringing about “genuine change” through social justice. Lorde called for us to embrace difference and not “merely tolerate” people who are different, because it is precisely this difference that offers a “fund of necessary polarities between which our creativity can spark like a dialectic.”

Brazilian philosopher of education Paulo Freire (1970) was of similar thinking. In his seminal work, Pedagogy of the Oppressed, Freire envisioned an education for conscientização, a way of raising critical awareness of important social and ethical issues for positive change. Freire claimed that our social existence is shaped by acquired social myths which in turn shape the way we think and behave. Critical reflection is key to making sense of this world and the various structures that bond it together into a socio-technical system; only then can we begin to uncover actual problems and actual needs.

The overarching aim of the Fair-AIEd project is to identify actual problems and actual needs, and then meaningful processes and practices for driving ethical technological change in education. As well as the implications of AI systems for teaching and learning, the project examines potential benefits, harms, and risks associated with the leadership roles P3s play in the design and use of AI infrastructures for education. To work towards this aim, WP1 started with identifying key concepts through a horizon scan of education and development initiatives specific to P3s and AI. The findings from WP1 form the basis for subsequent socio-technical analyses. While all three of the project’s guiding research questions are in the back of my mind, my discourse analysis for WP1 has been driven by trying to understand context before attempting to answer RQ3 in particular: How can governments facilitate the creation of ethical AIEd policies for development goals?

One of the first WP1 research tasks I started with was collating a corpus of multi-scalar public documents I’d been slowly collecting for a handful of years. My focus was on AI, education, development, and ethics. I won’t go into detail about methods here, but I’d still like to share a few of the strategies I’ve used for data collection, organisation, and analysis. My aim was to obtain a snapshot of the discourse horizon of AI in education. I started with a list of keywords, which my UCL colleague, Dr Andreas Vlachidis, used to run a word-frequency scrape of the text-based corpus for data relevant to the project’s focus. Andreas worked with me on the SEED project that led to the Fair-AIEd proposal. He and I, along with our colleague Dr Panos Panagiotis (UWE), are in the process of completing a paper about our methods and findings. We hope to have the paper ready for publication in the next few months.


Keyword Combination            Matches     Keyword Combination                            Matches
AI                             18110       AI + GDP                                       8
AI + development               461         AI + SDG (or sustainable development goals)    7
AI + education                 416         AI + education + development                   4
AI + efficiency / efficient    188         AI + education + inclusion / inclusive         4
AI + ethics                    172         AI + scalable                                  2
AI + growth                    166         AI + liberal/ism                               2
AI + innovation                148         AI + ethics + inclusion / inclusive            1
AI + market                    92          AI + trust                                     44
AI + Africa                    89          AI + digital divide                            1
AI + India                     26          AI + underserved                               1
AI + competition               33          AI + freedom                                   0
AI + profit                    8           AI + equity/equitable                          13
AI + personalisation           16*         AI + education + fair                          6*

Table 1: Keywords used for frequency-based corpus analysis


Since conducting this sweep of the text, I’ve collected a number of other documents. Reviewing these documents as a new whole has made me realise that there are certain keywords I should have included in my initial language search. I will address this in my next pass at the discourse, where I will focus more on the issues identified as significant in the initial AIEd horizon scan.

The frequency-based corpus analysis tasks gave us a broad overview of the contents of our text collection. The tasks had confirmatory value, reflecting the most frequent words and groups of words in documents. A preliminary sweep produced a list of 26 initial keyword combinations using “AI/Artificial Intelligence” as the base-keyword and a range of other keywords as contextual triggers. Table 1 presents the keyword combinations and the number of matches across the document collection for each combination. Our results confirmed that the majority of the documents contain information relevant to the topic of investigation. Blind, bottom-up, frequency-based measures are capable of delivering useful abstractions about the main contents of a corpus, but they do not necessarily reveal targeted pieces of information which may carry particular interest for a study.
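For readers curious about the mechanics, below is a minimal sketch of how a base-keyword + contextual-trigger count of this kind can be implemented. It assumes a directory of plain-text files and uses an illustrative subset of the Table 1 combinations; it is not the toolchain Andreas actually used, and the tokenisation is deliberately crude.

```python
# Minimal sketch of a frequency-based keyword-combination count.
# Assumption: the corpus is a folder of plain-text files ("corpus/").
from pathlib import Path

# (label, contextual trigger) pairs -- illustrative subset of Table 1
COMBINATIONS = [
    ("AI + development", "development"),
    ("AI + education", "education"),
    ("AI + ethics", "ethics"),
]

def is_base_keyword(token: str) -> bool:
    """Crude test for the base-keyword 'AI' / 'Artificial Intelligence'."""
    w = token.strip(".,;:()\"'").lower()
    return w in {"ai", "artificial"}

def count_combinations(corpus_dir: str, window: int = 5) -> dict:
    """Count occurrences where a trigger appears within `window` words
    on either side of the base-keyword (cf. the 5-word window in Table 1)."""
    counts = {label: 0 for label, _ in COMBINATIONS}
    for path in Path(corpus_dir).glob("*.txt"):
        tokens = path.read_text(encoding="utf-8", errors="ignore").split()
        for i, tok in enumerate(tokens):
            if not is_base_keyword(tok):
                continue
            context = " ".join(tokens[max(0, i - window): i + window + 1]).lower()
            for label, trigger in COMBINATIONS:
                if trigger in context:
                    counts[label] += 1
    return counts

print(count_combinations("corpus/"))
```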

A Keyword in Context (KWIC) extraction task pulled out 150-character pieces of text containing the base-keyword and any contextual keyword located within five positions to the left or right of the base-keyword. The extracted pieces were then cast into HTML pages which compiled the extracted information for each document in the collection. Figure 1 presents an example of an extract showing the keyword phrase, its immediate context, and the larger section from which the phrase was extracted. Extracts were used to support quick inspection of documents and to draw attention to passages which potentially carry interesting elements meriting further attention.
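Again as a hedged sketch rather than the project’s actual script: a KWIC pass can be approximated in a few lines, extracting a 150-character window around each occurrence of the base-keyword and writing the extracts to an HTML page. The file names here are assumptions.

```python
# Sketch of a KWIC extraction step: 150-character windows around each
# occurrence of the base-keyword, compiled into a simple HTML list.
import html
import re
from pathlib import Path

def kwic_extracts(text: str, keyword: str = "AI", width: int = 150):
    """Yield `width`-character extracts centred on each keyword occurrence."""
    for m in re.finditer(rf"\b{re.escape(keyword)}\b", text):
        start = max(0, m.start() - width // 2)
        yield text[start: start + width]

def to_html(doc_path: str, out_path: str = "kwic.html") -> None:
    text = Path(doc_path).read_text(encoding="utf-8", errors="ignore")
    rows = "\n".join(f"<li>…{html.escape(ex)}…</li>" for ex in kwic_extracts(text))
    Path(out_path).write_text(f"<ul>\n{rows}\n</ul>", encoding="utf-8")

# Hypothetical document name, for illustration only.
to_html("corpus/sample_report.txt")
```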


Figure 1: Keyword in Context (KWIC) extract for the combination “AI + education”


When reviewing the corpus, comprising samples from both popular and academic discourse, it is clear that over the last few years Artificial Intelligence (AI) has taken centre stage in debates about how school governance, pedagogy, and learning can be rebooted to equip students with the 21st Century Skills essential for participation in larger society. As Audrey Azoulay (2018), the Director-General of UNESCO, claims: “Education will be profoundly transformed by AI… Teaching tools, ways of learning, access to knowledge, and teacher training will be revolutionized.”

In order to realise this vision, public–private partnerships (P3s/PPPs) are being established to manage AI initiatives in an attempt to spur digital transformations in education, innovation, and growth. In the discursive space, AI is presented as having a range of benefits, such as the potential to accelerate attainment of global education goals by reducing barriers to accessing education; automating administration and management, as well as teaching and learning processes; and optimising methods for improving learning measurement and outcomes. AI in education has also been attributed the potential to close the digital skills gap and narrow the digital divide.

At this point, however, it also appears that many of the claims of the revolutionary potential of AI in education are based on conjecture, speculation, and optimism, without sufficient attention paid to critical study of educational technology in schools and schooling. Given my reading of the field of action so far, there doesn’t appear to be enough social science research conducted in schools that can be used to justify widespread claims of AIEd excellence. Much of what exists now as “evidence-based” mostly relates to how AI can work in education in a technical capacity, without pausing to ask, and comprehensively answer, the question of whether AI is needed in education at all.

Recent critical edtech scholarship has tried to move beyond asking questions limited to what works. To use a recent example, in their critical analysis Huw Davies, Rebecca Eynon, and Cory Salveson (2020) develop a ‘knowledge graph,’ an online tool to examine how stakeholders in edtech position themselves. Through mapping nodes and connections, the authors were able to identify the main concepts collectively promoted and the incentives for doing so.

I’ve been exploring a similar tool for data visualisation which allows me to trace relationships across nodes as possible leads for further inquiry. While there are many benefits (and challenges) associated with using these tools for data collection, for me the most helpful point of entry into making qualitative and quantitative sense of biopower (biopolitics), a research interest of mine, is through Named Entity Recognition (NER) feeding a social network visualisation that allows me to trace societal ties across P3s on a map. In the spirit of STS, and borrowing a metaphor from cartography, this method of data visualisation seeks first to render the social world flat, so that the emergence of new connections between nodes is made visible however faint their traces may be. I also found semantic scraping tools from the computer sciences to be a boon when searching for useful data and potential research leads.
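To make this concrete, here is one way an NER-to-network pipeline of this kind might be sketched. spaCy and networkx are stand-ins, since this post does not name the actual tools, and the corpus layout is an assumption: entities that co-occur in a document become linked nodes whose edges can later be inspected visually.

```python
# Hypothetical NER-to-social-network sketch: extract organisation entities
# from each document, then link entities that co-occur in the same document.
from itertools import combinations
from pathlib import Path

import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")  # small English model (assumes prior download)
G = nx.Graph()

for path in Path("corpus/").glob("*.txt"):
    doc = nlp(path.read_text(encoding="utf-8", errors="ignore"))
    orgs = {ent.text for ent in doc.ents if ent.label_ == "ORG"}
    # Co-occurrence within one document becomes a weighted edge.
    for a, b in combinations(sorted(orgs), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Export for visual inspection in, e.g., Gephi or Graph Commons.
nx.write_gexf(G, "p3_network.gexf")
```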

Biopolitics and biopower

Biopolitics can be viewed as an ethico-political rationality that takes the administration of populations and life as its subject. Biopower refers to the process in which biopolitics is operationalised in society, moving through dispersed networks, or what Foucault would call the dispositif. Biopower is a power that, for Foucault (1976):

…exerts a positive influence on life that endeavours to administer, optimize, and multiply it, subjecting it to precise controls and comprehensive regulations. (p. 137)

Biopower comprises two basic modes: disciplining of the individual body and regulatory control of the population. As I am concerned with the role AI systems play in the formation and governance of manageable subjects, a general intent is to make sense of how structures of biopower might foreground techniques of biopower: “the range of practical mechanisms, procedures, instruments, and calculations through which authorities seek to guide and shape the conduct and decisions of individuals and collectives in order to achieve specific objectives” (Lemke, 2016).

Identifying how biopower asserts itself in the field of AIEd begins with developing an understanding of the field within which this power is situated, and of how power acquired through ownership of ‘knowledge’, or knowledge/power, is able to define the parameters that guide social behaviours, including setting ethical and regulatory norms. Several concepts drawn from biopolitics have the potential to reveal the technologies of power that play a key role in shaping what counts as truth, and therefore what counts as knowledge and reality (Foucault, 1977). These concepts have guided my process of inquiry, which is unfolding organically with each new lead appearing as a dynamic node on my map. One of these pathways of inquiry led to identification of patterns in the United Kingdom (UK), so instead of focusing on the context of Ghana and South Africa first, my research methods and emerging themes got me stuck in the groove of AI, ethics, and education in the UK.

Given my project is informed by postcolonial and decolonial theories, it makes sense to begin with the Centre [of Empire], to understand the ethico-political economy of communication in this context before branching outwards to Africa as Periphery. My initial findings have allowed me to obtain a general overview of the discourse horizon of AIEd. What is being discussed presently can be captured by three broad themes which underpin the rhetoric of using AI for the betterment of education and society more broadly:

  • Geo-political dominance through education and technological innovation
  • Creation and expansion of market niches (online/offline worlds)
  • Managing narratives, perceptions, and norms

While I’m interested in all three themes, I chose to start by focusing on psycho-social technique: how knowledge is produced and disseminated to manage narratives, public perception, and the development of norms. The P3s actively setting ethical frameworks for the currently self-regulated field of AI in education provide an example of the kind of industrial management that Stephen Ball writes about. In this case, management becomes a technology for morality, and thus a technology of power/knowledge. This kind of event can also be understood as a “micro‐technology of control” manifested as processes of self‐regulation (Ball, 1993), which emerge as patterns in the UK context, especially post-pandemic.

Post-pandemic pedagogies

A large number of observers continue to point out that COVID-19 has given business the boost it needed to enter education. This move to integrate itself into education through (AI) technology has highlighted the necessity of conducting close analysis of Public–Private Partnerships (P3s/PPPs) and the ways they are being used to respond to the educational challenges resulting from the pandemic (e.g. Microsoft, 2020; Google, 2020; Pearson, 2020). In their important work on P3s, Elaine Unterhalter and Jasmine Gideon (2021) ask questions worth sharing: Should we be concerned about the role being assigned to PPPs in the post-COVID landscape? What does this mean for existing inequalities?

The implications of these post-pandemic actions are addressed by many others. For instance, Neil Selwyn, Felicitas Macgilchrist, and Ben Williamson (2020) maintain that “Discussions about digital technology and education need to be focused on ensuring that the pandemic is not used as an excuse to push through further corporate reforms of public education” (p. 6). I am also of the view that the pandemic has been used as an opportunity to push for radical reform and commercialisation of education. That much becomes evident when scrutinising multi-scalar documents to make sense, first, of the dominant ideas about AI in education that in turn affect policy making and edtech regulation.

Fair-AIEd’s preliminary findings suggest that popular discourse frames AI in school as an inevitability and a necessity for optimised “data-driven” and “evidence-based” educational governance. This view of AI in school also rests on the assumption that no space in the human body is sacred enough to be protected from the stealth and creep of AI’s attention. In this social imaginary, every aspect of bare life is and should be thrown open for measurement and behavioural management via “timely nudges,” for example.

In this kind of datafied school system, student performance becomes driven by an infrastructure of technical methods that monitor bodies and minds, resulting in what Foucault (1977) would view as a “swarming of disciplinary mechanisms,” a method of shaping students into manageable bodies. Such automation of schooling can lead to educational governance being enacted in ways that reproduce and amplify forms of exclusion and discrimination, and assimilate difference and the less measurable ways of being-in-the-world into a totalising mono-structure of power/knowledge.

When examining multi-sector discourses around AIEd, it becomes clear that one vision of the #FutureofEducation is more contagious than others. With the help of social media exposure attached to claims of expertise, and self-identifying as working to protect disadvantaged populations from unethical business, one sample group of stakeholders has started to emerge as a representation of the thought leadership currently dominating popular discourse on AI, ethics, and education in the UK context.

Even at a preliminary level, it is clear that the values attached to many AI codes of ethics, from local to global scales, are embedded in a Eurocentric philosophy that does not necessarily translate to global contexts and diverse interfaith traditions. For example, a database of more than 160 ethical frameworks from around the globe indicates that many of these existing guidelines have been developed by stakeholders primarily from North America and Europe (Jobin et al., 2019). In addition, only a handful of these guidelines include oversight or enforcement mechanisms. While there is a slow emergence of literature that grounds AI ethics in diverse global philosophies and faiths, there remains a strong bias towards an established Western canon. There is also limited representation of what counts as a diverse global population in the norm-setting processes enacted in the UK context, which raises alarm about inclusion and representation. With this in mind, I’m left with a question: Whose voices are being heard in and represented as AI ethics for all?

Nodeworthiness

I thought I’d quickly share one practical example of how I’ve been working through my discourse analysis. I’ll start with how I’ve been categorising nodes. I’m trying to work with social network analysis as a way into mapping the AI in education landscape. A fantastic project I worked on while at the LSE informed my approach to this WP1 horizon scanning task. While my grasp of social network analysis cannot compare to the innovative research done by the people I worked with on the Virt-EU project, I thought I would use some of the more basic techniques to generate maps that I could then use to identify patterns in language, emerging themes, and dominant ideas and values. I find this approach to discourse has been a useful way into making sense of dominant ideologies.


Figure 2: A live visualisation of this social network map, specific to the nodes connected to the entities taking part in developing the Asilomar Principles, can be found at Graph Commons.


Informed by the Asilomar map (above) developed in the SEED project, my larger map of AI, ethics, and education is also starting its life resembling a small [eco]system populated by nodes (entities) and visible connections between nodes (Latour, 2005). Despite the busy-ness of this system, it is possible to identify and trace fields of influence across nodes, or across entities on the map. With closer attention to the discourses flowing through these connections, and then conducting an analysis by collocating text/talk, coding, and thematising, it is possible to draw conclusions about the circuits where power/knowledge is more deeply entrenched in the system – as a cultural media environment (Hall, 1973). I will identify the nodes as #1–6 for the sake of simplicity.

#Node 1 was identified by my map as significant given the number of outgoing and incoming connections to other nodes, which suggests the exercise of influence in the field of action (AI, ethics, and education as field of action). Entities connected to this node can be traced through social networks to a number of industry nodes, ranging from edtech start-ups to multi-national corporations – nodes that stand to make significant gains from the integration of AI into education. To put things into perspective: AI in education is a path to a global market valued at US$1.1 billion in 2019 and expected to reach US$25.7 billion by 2030 (Globe Newswire, 2020).
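As a toy illustration of the kind of measure involved (not the project’s actual computation), degree centrality in networkx flags exactly this sort of heavily connected node; the edge list below is invented:

```python
# Sketch: reading a node's significance off its incoming and outgoing
# connections, as with #Node 1 above. Edges are illustrative only.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("Node 1", "Node 2"), ("Node 1", "Node 3"), ("Node 2", "Node 6"),
    ("Node 3", "Node 1"), ("Node 4", "Node 3"), ("Node 5", "Node 1"),
])

# degree_centrality counts ties in both directions, normalised by the
# number of other nodes; in/out variants separate the two directions.
for node, score in sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")
```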

I tagged #Node 1 with the following codes for further sorting, thematising, and analysis:

  1. #P3 / #PPP
  2. #assumed leadership
  3. #platform authority
  4. #populariser
  5. #commercialisation
  6. #privatisation
  7. #personalisation
  8. #conflicts of interest
  9. #coloniality of knowledge
  10. #power/knowledge creation

Some of the objectives of #Node 1 are found to repeat across connecting nodes. Claims of authority and expertise by #Node 1 are disseminated through #Node 2. #Node 1’s assumed responsibilities are identified in language extracted from #Node 2, for example, as ownership of knowledge: developing and disseminating a system for the ethical governance of AI in education (UK); publication and dissemination of a Code of Ethics for the design, development, and use of AI in education and training; producing frameworks for responsible design; building public trust in and public knowledge about AI in education; mandating ethics training for all involved in education and training; and facilitating ethics training and developing protocols for the evaluation and approval of AI design and use.

It is clear from the excerpts extracted from #Node 2 (and other similar nodes) that #Node 1 is envisioned as having a significant influence on shaping and reforming edtech policy by asserting itself as a voice of authority and expertise through discourse first, expertly managing the direction of rhetoric about AI ethics in education. Also extracted from #Node 2 are excerpts that evoke Jungian archetypes of sage, hero, and ruler and attribute them to #Node 1, which other nodes also present as engaged in a fight for the good of school and society, especially the most vulnerable. Alarm and urgency, effective rhetorical techniques, also emerge in the #Node 2 data and across other related text.

There are several other questions that arise when examining the language extracted from these nodes. First, it is not difficult to trace, through connections between #Nodes 1 and 2 and outwards, that dominant views and assumptions tend to skew towards self-regulation, commercialisation, and privatisation of education. In terms of assumptions underpinning these views, even the term AI in education assumes itself into existence as a fait accompli. #Node 1 directly, and indirectly through #Node 2 as a proxy, claims for itself the expertise, trustworthiness, and authority required to manage the operationalisation of AI in education at scale. Data extracted from another closely connected node, #Node 3, reproduced the assumption that AI is both good and necessary for education, an assumption also identified across a number of nodes.

Upon further analysis of the text associated with these nodes/entities, ethical principles holding a central place in the discourse, e.g., fairness, transparency, privacy, and autonomy, are not sufficiently explained or provided with historical context and firm footing in a philosophical tradition. In addition, as flagged in public reports, promoted principles are not necessarily being enacted by individual industry nodes exhibiting strong social network connections to #Node 1. Some of these industry entities connect to other nodes through, for example, publicly available reports and Freedom of Information Requests (FOI) submitted to testbed schools and academies.

Language extracted from #Node 4 explicitly identifies #Node 3, for instance, as demonstrating insufficient regard for the General Data Protection Regulation (GDPR) as it pertains to mining student data in school for experimental research and product development. #Node 5 connects to FOIs indicating that schools agreeing to participate in the AI edtech experiments of #Node 3 were not qualified to understand the due diligence needed to evaluate the harms and risks of the AI product. Elsewhere, #Node 4 offers a summary of the AI process integral to the product attached to #Node 3. Data retrieved from #Node 4 raises several concerns, including whether a product still in development, or one using children’s data for ongoing or new product development, can do so lawfully other than for minor and non-substantive tasks or security enhancement; in particular where the school itself, not the company, makes the decision to use the product. When asked for comment about transparency and data protections specific to the AI product, the answer provided by the industry entity amounted to an eleven-word response captured in #Node 4 data: “To clarify, we don’t use any private data in the AI.”

In addition, #Node 3 has emerged as a key voice in the shaping of norms and codes regarding AI, ethics, and education, defining its own field of influence through broad and repeated disseminations of self across social media, supplemented with appeals to the authority of evidence and assumed field-of-action expertise (e.g., AI, ethics, and education). These claims of expertise and authority can be traced across a number of connections to other nodes in the form of public documents, including news sites and academic and educational tech blogs. #Nodes 1–3, through text/talk, appear to share the same views and values as other closely connected nodes, a perspective that subscribes to the commercialisation of education. These values can be better understood by mapping their political economy (biopolitics) using concepts of commodification, structuration, and spatialisation (Gandy & Nemorin, 2019). NER connections from #Nodes 1–3 connect directly to #Node 6, an open letter published recently in the Sunday Times (2021).

The letter is an exemplar of how strategies of power/knowledge can be identified and traced through discourse. Written by a group of MPs, educationalists, and entrepreneurs, the letter calls for a royal commission on education to sweep away the “factory model” of teaching and learning currently taking place in schools. However, this new vision of AI-driven education also has the potential to deskill teachers, position students as nodes for data extraction and commercialisation, and widen the digital divide. Language extracted from #Node 6 suggests that the pandemic is being viewed as a point of entry for the commercialisation of education:

The pandemic has shone a spotlight on the real difficulties that many educators and families have when it comes to benefitting from the use of technology — it is essential that ALL learners are given the vital learning lifeline that technology can provide.

Also included in the letter are rhetorical moves that aim to pressure the State into action through naming and shaming on social media:

The shameful lack of engagement with innovation, data and AI by the Department for Education is depriving young people of the opportunity to a high-quality education that can weather the challenges of pandemics and school closures and restrictions. Worst of all, in this scenario those most in need are also those who miss out most.

This node connects back to data extracted from #Node 3 (and others), creating a sense of urgency and cause for alarm. Here, the future of innovation in education and training is at stake if we don’t act to integrate AI into education right away. Similar to related nodes, this one can be understood as an example of how entities acquire authority to shape the field of exercise: power/knowledge in action through discourse first, which then leads to practice, including management of radical educational reform and regulation:

Clearly, the growth of AI and robotics will have a profound impact, so we need a special royal commission on education, AI and exam reform that would include experts and report within nine months.

Of nodeworthiness are claims emerging from a number of nodes that conflate AI and quality education without pausing first to ask what ‘quality’ education means. In addition, what is the purpose of AI in education? Where is the evidence to support current claims as a matter of transparency and rigour? What precisely is the educational problem that AI is here to fix? This letter, by virtue of the names and entities reified within it, connects to a number of other nodes. Following some of these nodal leads has at times led to a dead end. Other times it has led to a gold mine.

A different vision

COVID-19 will leave a lasting impact on education and digital technology. As others have urged, “Despite the prevailing rhetoric from those who stand to gain most from such changes, this is not ‘business-as-usual’ or a ‘new normal’” (Selwyn et al., 2020, p. 2). As schools become more dependent on AI edtech systems for teaching and learning, management, and communications, and once a particular threshold is reached in the number of schools these systems are embedded in, the purpose of education as a public good becomes largely if not entirely dependent on private companies (Defend Digital Me, 2021). That is worrisome. With steadily increasing calls for AI in education, what is cause for alarm is this:

  • Technologies of the self with power to construct epistemological ‘reality’ are being wired into the structures and processes of education;
  • Some of these technologies are not being vetted sufficiently to determine their full range of implications for school populations, and some organisations are not being transparent and compliant with their data collection, storage, and protection processes; and
  • It is impossible to protect the privacy and family rights of students without sufficient understanding of how AI technologies work for education, including their range of implications.

There are many concerns around AI in education that need to be more widely discussed and debated before allowing this unknown quantity into the everyday lives of teachers and learners. I’ve left the following concern as the last point I raise before signing off from my update. As I continue to collect data, I keep returning to a disciplinary question related to meta-ethics, moral philosophy, and philosophy of education. Based on my reading of AIEd so far, the rigorous processes of critical inquiry these disciplines offer must begin featuring more prominently in public debate about AI, ethics, and education. For instance, town hall discussions that begin with developing an understanding of the concepts we are being nudged to accept as important principles, or making sense of what AI actually is before trying to understand what it does in and for education, and whether or not it is needed at all… whether or not it is meaningful. Specifically, how precisely will AI serve the needs of education as a public good? That is for starters.

I’d like to end this month’s update on a happy note and extend a warm welcome to Dr Hayford Ayerakwa, the newest addition to our Fair-AIEd project team. Hayford brings to the project a background in social and economic geography, with research interests focusing on technology and education. Hosted by the University of Cape Coast (UCC), he will be responsible for the day-to-day running of the project in Ghana. I look forward to our collaboration.

Till next time.

Selena



References

Ball, S.J. (1993). Education policy, power relations and teachers’ work. British Journal of Educational Studies, 31(2), 106–121.

Davies, H.C., Eynon, R. & Salveson, C. (2020). The mobilisation of AI in education: A Bourdieusean field analysis. Sociology, 1–22. DOI: 10.1177/0038038520967888

Feenberg, A. (2017). Technosystem: The Social Life of Reason. Cambridge, MA: Harvard University Press.

Foucault, M. (1977). Discipline and Punish: The Birth of the Prison. New York: Pantheon Books.

Foucault, M. (1976). The Will to Knowledge: The History of Sexuality Volume 1 (trans. R. Hurley, 1998). London: Penguin.

Gandy, O.H. Jr., & Nemorin, S. (2019). Toward a political economy of nudge: smart city variations, Information, Communication & Society, 22:14, 2112-2126, DOI: 10.1080/1369118X.2018.1477969

Latour, B. (2005). Reassembling the Social. Oxford: Oxford University Press.

Lemke, T. (2016). Foucault, Governmentality, and Critique. Abingdon, Oxon: Routledge.

Selwyn, N., Macgilchrist, F. & Williamson, B. (2020). Digital education after Covid-19. TECHLASH, Issue #01. Retrieved: https://bit.ly/36qEWMc

UNESCO (2019). How can artificial intelligence enhance education? https://bit.ly/2Dx6U9M

Unterhalter, E. & Gideon, J. (Eds.). (2021). Critical Reflections on Public Private Partnerships. London: Routledge.

* Although language around personalisation and fairness was evident across many documents, the initial search returned 0 hits in a 5-word window. Because of this limitation, we conducted advanced searches and extended the context horizon from 5 to 20 words (left and right).

** During my recent Intellectual Life presentation at the UCL Knowledge Lab, a colleague asked what the educational problem that AI is supposed to fix is. This question is worth further discussion.

Author

Dr Selena Nemorin is a UKRI Future Leaders Fellow and lecturer in sociology of digital technology at University College London, Department of Culture, Communication and Media. Selena’s research focuses on critical theories of technology, surveillance studies, tech ethics, and youth and future media/technologies. Her past work includes research projects that have examined AI, IoT and ethics, the uses of new technologies in digital schools, educational equity and inclusion, as well as human rights policies and procedures in post-secondary institutions.
