When is any agent a moral agent?: reflections on machine consciousness and moral agency

Authors

  • Joel Parthemore
  • Blay Whitby

Summary, in English

In this paper, we take moral agency to be the context in which a particular agent can appropriately be held responsible for her actions and their consequences. In order to understand moral agency, we discuss what it would take for an artifact to be a moral agent. For reasons that become clear over the course of the paper, we take the artifactual question to be a useful way into the discussion but ultimately misleading. We set out a number of conceptual preconditions for being a moral agent and then outline how one should – and should not – go about attributing moral agency. In place of a litmus test for such agency – such as Colin Allen et al.'s Moral Turing Test – we suggest some tools from conceptual spaces theory and the unified conceptual space theory for mapping out the nature and extent of that agency.

Publishing year

2013

Language

English

Pages

105-129

Publication/Series

International Journal of Machine Consciousness

Volume

5

Issue

1

Document type

Journal article

Publisher

World Scientific Publishing

Topic

  • Languages and Literature

Keywords

  • moral agency
  • Moral Turing Test
  • self
  • akrasia
  • concepts
  • conceptual spaces

Status

Published

Project

  • Centre for Cognitive Semiotics (RJ)

ISBN/ISSN/Other

  • ISSN: 1793-8430