AIML2020: Artificial Intelligence and Moral Learning

Country: United Kingdom

City: London

Abstract due: 10.01.2020

Dates: 06.04.20 — 09.04.20

Area of Sciences: Pedagogy

Organizing committee e-mail:

Organizers: St Mary’s University, Twickenham, London and others


We welcome submissions on the topic of artificial intelligence and moral learning, broadly construed. These may include, but are certainly not limited to:

    How can we translate what we know about human moral learning into a machine learning problem?

    What are some principles that can ensure that AI systems are accountable to people?

    How can we make AI systems sufficiently morally generalizable (i.e. have robust behavior in novel ethical situations)?

        Specifically, given our current awareness of adversarial inputs, what directions can we pursue to ensure the reliability of moral AI systems in adversarial situations?

    How can we extend to the moral landscape current efforts in making machine learning systems’ behavior intelligible to humans (e.g. visualization of image recognition neural network layers, saliency maps)?

    Different moral frameworks offer different conceptions of moral agency. What does interpreting AI systems as moral agents suggest for the development of moral learning?

    Reinforcement learning seems like a promising avenue for moral learning. What bottlenecks exist in this approach, and how can they be overcome?

    Can AI systems develop character virtues? What would that look like?

    How can we develop systems that can explore their own space of uncertainty and generate “questions” that can be answered by a human “moral trainer”?

    Is it possible to create AI systems that are morally superior to humans, and what would that mean?


Conference Website:
