
Sophia, Belgisch Netwerk voor Genderstudies

+32 (0)2 229 38 69

Middaglijnstraat 10, 1210 Brussels

News

The ethics and social consequences of AI and caring robots. Learning trust, empathy and accountability

Our fascination with robots is old. So are our misgivings. For more than a century, science fiction has warned us of the day they will take over. Social theorists have long been predicting the consequences of robots’ entrance into the workplace. Luddites have been warning of their impact on our lives and our relationships. And more nuanced examinations have probed the way we think of ourselves when we think of (and with) them. Yet, for many of us, robots in that stereotypical, personified form, as a unit we interact/intra-act with on an emotional level, have stayed in the realm of science fiction. We may have a robotic vacuum cleaner at home. We may even have given that vacuum cleaner a name. But an autonomous housekeeper robot who is part of the family (à la Rosie the robot maid in The Jetsons)? Not yet.

This is about to change. Robots are starting to enter our daily life. We and our children are going to be expected to interact with robots as they perform different kinds of care for and with us at different life stages. What will that do to how we – and how the robots – think of care? And how are we going to produce accountability, trust and empathy in the relational intra-actions we have, together?

Questions like these, about the ethical, economic, social, legal and labour market implications of the ongoing technological shift in society, are at the heart of the WASP-HS programme.

We are currently accepting applications for three fully funded, four-year PhD positions associated with this research (deadline 30 January 2020):

Position 1: Designing care robots

What bodies are assumed in the design of companion robots, and how does the design of the robot affect its interactions with humans? This project focuses on how care and affect are materialised in the body of the companion robot, with particular critical attention to intersections of gender, ethnicity and ability. An additional area of inquiry could examine how the material design features of the robot’s body are mediated through affective programming software to produce a more intimate encounter.

Position 2: Learning data for companion robots

How can robots learn to care when collecting data on relevant humans may be limited for ethical reasons? And if real data contain bias, on which data should you train your model? Generative machine learning techniques (such as generative adversarial networks (GANs)) offer a solution to problems with “real” data such as scarce availability, the labour intensity of data labelling, data biases, or privacy intrusiveness. This project comprises a critical inquiry into the production/collection of data sets used to help companion robots learn, and particularly the possibility of using GANs to assist with this.
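To make the GAN idea above concrete, here is a minimal, illustrative sketch (not part of the project itself) of the adversarial training loop: a generator learns to produce synthetic samples that mimic a "real" data distribution, standing in for sensitive human data, while a discriminator learns to tell the two apart. The 1-D Gaussian target, the linear generator, and the logistic discriminator are all simplifying assumptions chosen so the example fits in plain NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data we may not want to collect at scale: a 1-D Gaussian
# standing in for scarce or privacy-sensitive human measurements.
def sample_real(n):
    return rng.normal(4.0, 1.25, size=n)

# Generator G(z) = a*z + b with z ~ N(0, 1); discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0   # generator parameters
w, c = 0.0, 0.0   # discriminator parameters
lr, batch = 0.01, 64

for step in range(5000):
    z = rng.normal(size=batch)
    real, fake = sample_real(batch), a * z + b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator: gradient ascent on log D(fake), i.e. push fakes
    # toward samples the discriminator judges "real".
    d_fake = sigmoid(w * fake + c)
    grad_x = (1 - d_fake) * w          # d/dx of log D(x) at each fake sample
    a += lr * np.mean(grad_x * z)
    b += lr * np.mean(grad_x)

# After training, the generator yields synthetic data resembling the target,
# without any further access to the "real" samples.
synthetic = a * rng.normal(size=1000) + b
print(f"synthetic mean={synthetic.mean():.2f}, std={synthetic.std():.2f}")
```

The same adversarial structure, scaled up to deep networks and real sensor or interaction data, is what makes GANs a candidate for generating training material where direct data collection would be ethically fraught.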

Position 3: The affective space between human and companion robots

Current advances in robotics often focus on refining robots to learn about and respond better to humans. However, interacting well with a robot also requires significant learning on the part of the human participant. This project focuses on the affective space between human and robot, and the work that both participants must learn to do to create an emotional relation characterised by care and trust.

Interested? Please contact Ericka Johnson and Katherine Harrison with any questions, or apply here.