Sharing Moral Responsibility with Robots: A Pragmatic Approach
Mälardalen University, School of Innovation, Design and Engineering. ORCID iD: 0000-0001-9881-400X
2008 (English). In: Tenth Scandinavian Conference on Artificial Intelligence, vol. 173, IOS Press, 2008, p. 165-168. Conference paper, Published paper (Refereed)
Abstract [en]

Roboethics is a recently developed field of applied ethics which deals with the ethical aspects of technologies such as robots, ambient intelligence, direct neural interfaces, invasive nano-devices, and intelligent softbots. In this article we look specifically at the issue of (moral) responsibility in artificial intelligent systems. We argue for a pragmatic approach, where responsibility is seen as a social regulatory mechanism. We claim that having a system which takes care of certain tasks intelligently, learning from experience and making autonomous decisions, gives us reasons to talk about a system (an artifact) as being "responsible" for a task. No doubt, technology is morally significant for humans, so the "responsibility for a task" with moral consequences could be seen as moral responsibility. Intelligent systems can be seen as parts of socio-technological systems with distributed responsibilities, where responsible (moral) agency is a matter of degree.

Knowing that all possible abnormal conditions of a system's operation can never be predicted, and no system can ever be tested for all possible situations of its use, the responsibility of a producer is to ensure proper functioning of a system under reasonably foreseeable circumstances. Additional safety measures must, however, be in place in order to mitigate the consequences of an accident. The socio-technological system aimed at ensuring a beneficial deployment of intelligent systems has several functional responsibility feedback loops which must function properly: the awareness and procedures for handling risks and responsibilities on the side of designers, producers, implementers and maintenance personnel, as well as the understanding of society at large of the values and dangers of intelligent technology. The basic precondition for developing this socio-technological control system is the education of engineers in ethics and keeping alive the democratic debate on preferences about the future society.

Place, publisher, year, edition, pages
IOS Press, 2008. p. 165-168
Series
Frontiers in Artificial Intelligence and Applications, ISSN 0922-6389
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:mdh:diva-7273
ISI: 000273520700021
ISBN: 978-1-58603-867-0 (print)
OAI: oai:DiVA.org:mdh-7273
DiVA, id: diva2:237283
Conference
10th Scandinavian Conference on Artificial Intelligence (SCAI 2008), Stockholm, Sweden, 2008
Note

http://www.iospress.nl/loadtop/load.php?isbn=9781586038670

Available from: 2009-09-25. Created: 2009-09-25. Last updated: 2013-11-02. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Authority records
Dodig-Crnkovic, Gordana

