Parallel session 4 (26th May): Digital Education Governance Beyond International Comparative Assessment
From Claire Sowton
*Please note: captions are still to be reviewed. We anticipate this work being complete by Aug 22*
Chair: Niels Kerssens
Participatory Processes for Governance in AIED
Ana Mouta, Eva María Torrecilla-Sánchez and Ana María Pinto-Llorente (University of Salamanca)
This paper results from a semi-systematic literature review and presents the concerns underpinning the (mostly) participatory processes through which global governance for the use of Artificial Intelligence in Education (AIED) is being sought. Although work on trustworthy Artificial Intelligence (AI) was gaining traction, by 2019 there were still no regulations specifically for AIED (Holmes et al., 2019). But for algorithms to play a role in sustaining democracy and education, AI must be understood as a socio-technical layer (Dignum, 2021), and Dewey’s notions of the ethics of moral principles and methods of deliberation are crucial in the educational field (DeFalco and Hampton, 2020). If we consider the Digital Divide, AI has widened the gap between developed and developing countries, between socio-economic groups within countries, between those whose jobs are enhanced by AI and those who may be replaced by it, and between owners and users of technologies (Miao et al., 2021). In fact, the divide now extends beyond users, as it encompasses a divide between users and corporations (Abboud et al., 2020). Given this mindset, some AIED frameworks and their guiding principles will be presented, namely UNESCO’s “Rights, Openness, Access and Multi-stakeholder Governance”, the “Beijing Consensus on Artificial Intelligence and Education” (designed by over 50 government ministers, international representatives from over 105 Member States and almost 100 representatives from UN agencies, academic institutions, civil society and the private sector) and “The Ethical Framework for AI in Education” from the Institute for Ethical AI in Education, which resulted from a collection of interviews with policymakers, academics, philosophers and ethicists, industry experts, and educators.
Three Trajectories through Public-Private Governance Relations and Shifting Sectoral Dynamics in the Development of AI in China
Jeremy Knox (University of Edinburgh)
This talk will examine the impact of the recent ‘double reduction’ (双减) policy in China, which imposed stringent regulations on the private education sector in July 2021. The particular focus of discussion will be the impact of this policy on the development of AI for education in China, which, prior to the recent regulations, was driven largely by the private education sector. Alongside this policy analysis, the recent trajectories of three Chinese companies developing AI for education will be discussed, as ways of understanding the shifting sectoral dynamics in China and the ways data-driven technologies are becoming established around particular kinds of public-private relations. First, ‘New Oriental’, a well-established and prominent private education company that has shifted from being a central player in the development of AI to, somewhat bizarrely, ‘an online marketplace for agriculture products’ (Wu, 2021). Second, ‘Tomorrow Advancing Life’ (or TAL), a well-known private education company that has become a ‘national AI champion’ for ‘smart education’ (Wernberg-Tougaard, 2021), thus taking on a powerful role in the governance of educational technology development. And finally, ‘Squirrel AI’, perhaps the best-known educational AI company outside China, which has recently implemented substantial shifts in its business by developing new AI hardware and attempting to ‘service’ more public schools. In conclusion, drawing on the apparent fortunes of these three companies as they have navigated recent government policy in China, this talk will suggest broader implications for the regulation of AI in education internationally.
Algorithmic Impact Assessments in Education: The Potential of Collaborative Governance and Technical Democracy
Teresa Swist (University of Sydney), Kalervo N. Gulson (University of Sydney) and Jason Schultz (New York University)
There is a growing focus on the use of algorithmic systems in education, especially on proprietary systems used in K-12 systems and schools (Calo and Citron, 2021; Gulson and Witzenberger, 2021; Williamson, 2019). Accompanying the increased use of algorithmic systems are calls for ‘core agencies’ such as education to no longer use opaque or ‘black-box’ automated systems (Campolo, Sanfilippo, Whittaker, and Crawford, 2017), with the European Union identifying automated systems in education as ‘high-risk’ activities (European Commission, 2021). This presentation focuses on the increased use of algorithmic systems in education, differentiated impacts on diverse stakeholders, and the need for innovative approaches to support collaborative governance. Specifically, we examine the potential of Algorithmic Impact Assessments (AIAs) to assess the uses and effects of algorithmic systems in education. While there are emerging ethical mechanisms in the sector to assess algorithmic impacts (Holmes et al., 2021), the potential of co-creating education-specific AIAs is underexplored. AIAs are increasingly recognised as a valuable tool for policymakers and practitioners to assess possible societal impacts both before a system is in use and after application (Ada Lovelace Institute, 2020). We examine the challenges and opportunities associated with AIAs as an innovative approach to include multiple stakeholders in both the design and the application of algorithmic systems in education. To inform future digital education governance theory and research, we also consider how this collaborative process potentially aligns with the concept of ‘technical democracy’ (Callon, Lascoumes, and Barthe, 2011), focused upon collective learning and experimentation to address socio-technical controversies.