Dr Vassilis Galanos, 'The Social Construction of AI as Existential Risk: Stories of Expectations and Expertise'
From James Stewart
Week 4: AI Safety and AI Ethics - bridging cultures of existential and social risk.
This presentation undertakes a critical examination of the terms 'artificial intelligence' (AI) and 'existential risk' as expressions of linguistic socialisation. Revisiting the social construction of technology (SCOT) theory/method package and incorporating insights from the sociology of expectations and expertise, I aim to illuminate the sociological implications and historical underpinnings of framing AI as an existential threat. First, I emphasise the interpretative flexibility of, and the influence of relevant social groups on, the contours of existential risk and AI. My analysis identifies the multiple interpretations surrounding AI and existential risk, highlighting how various actors contest to establish a dominant, credible narrative. This contestation is prevalent among social groups situated along a spectrum from 'singularitarians' to 'job-displacementarians', each with varying levels of AI expertise and experience. Prestigious figures such as Stephen Hawking and Elon Musk are spotlighted for popularising doomsday scenarios despite little to no AI programming knowledge, influencing AI policy discourse between 2014 and 2018. AI-focused social groups are bifurcated into symbolic AI and connectionist expertise domains. Symbolic AI communities, wary of past 'AI winters' and the resultant funding stagnation, have often critiqued AI-as-existential-risk narratives and distanced themselves from connectionism, which they brand as advanced statistics. Conversely, connectionists (machine learning and neural network specialists) had previously distanced themselves from the AI label due to internal disputes and fears of unrealistic expectations. However, connectionism has garnered renewed relevance thanks to significant achievements enabled by internet technologies, leading to a resurgent identification with AI.
Second, the presentation discusses how technical closure is tentatively reached through the redefinition of terminology and problems, and the role of regulatory selectors, examining Geoffrey Hinton's influential pivot back to embracing the AI label and his role in media discourse as an existential risk commentator. This move is posited as instrumental in a Kuhnian paradigm shift in AI, particularly as connectionism becomes synonymous with AI and plays a crucial role in shaping the current AI Act, whose regulation is intimately linked to various perceived risks. This endeavour not only traces the lineage of, and rivalry between, AI theoretical traditions but also scrutinises the reciprocal influence of socio-political forces and technological discourse. Through this analysis, the presentation aims to surface the tacit underpinnings that navigate the complex landscape of AI as both a technical discipline and a social construct. Third, taking into consideration criticisms of SCOT, I open up an agenda for avoiding political quietism by actively looking for invisible actors marginalised during the social shaping of AI.
Dr Vassilis Galanos
(ve/vem) is Research Associate and Teaching Fellow at the Edinburgh College of Art, conducting a risk-based assessment of Generative AI in journalism as part of the BRAID UK initiative and teaching about Technology Futures. Vassilis investigates the historical and sociological underpinnings of interwoven AI and internet technologies, and how expertise and expectations are negotiated in these domains. Ve is Associate Editor of Technology Analysis and Strategic Management and prospective Lecturer in Digital Work at the University of Stirling's School of Management. Ve
is vegetarian and has jammed with Sun Ra Arkestra in the middle of