Artificial intelligence and democratic legitimacy: the problem of publicity in public authority
Beckman, Ludvig. Stockholm University, Stockholm, Sweden. ORCID iD: 0000-0002-2983-4522
Hultin Rosenberg, Jonas. Institute for Futures Studies, Stockholm, Sweden.
2022 (English). In: AI & Society: The Journal of Human-Centred Systems and Machine Intelligence, ISSN 0951-5666, E-ISSN 1435-5655. Article in journal (Refereed). Published.
Abstract [en]

Machine learning algorithms (ML) are increasingly used to support decision-making in the exercise of public authority. Here, we argue that an important consideration has been overlooked in previous discussions: whether the use of ML undermines the democratic legitimacy of public institutions. From the perspective of democratic legitimacy, it is not enough that ML contributes to efficiency and accuracy in the exercise of public authority, which has so far been the focus in the scholarly literature engaging with these developments. According to one influential theory, exercises of administrative and judicial authority are democratically legitimate if and only if administrative and judicial decisions serve the ends of the democratic lawmaker, are based on reasons that align with these ends, and are accessible to the public. These requirements are not satisfied by decisions determined through ML, since such decisions are determined by statistical operations that are opaque in several respects. However, not all ML-based decision support systems pose the same risk, and we argue that a considered judgment on the democratic legitimacy of ML in exercises of public authority needs to take the complexity of the issue into account. This paper outlines considerations that help guide the assessment of whether ML undermines democratic legitimacy when used to support public decisions. We argue that two main considerations are pertinent to such a normative assessment. The first is the extent to which ML is practiced as intended and the extent to which it replaces decisions that were previously accessible and based on reasons. The second is that uses of ML in exercises of public authority should be embedded in an institutional infrastructure that secures reason-giving and accessibility.

Place, publisher, year, edition, pages
2022.
National Category
Political Science (excluding Public Administration Studies and Globalisation Studies)
Identifiers
URN: urn:nbn:se:mdh:diva-65052
DOI: 10.1007/s00146-022-01493-0
ISI: 000819880400001
Scopus ID: 2-s2.0-85133224056
OAI: oai:DiVA.org:mdh-65052
DiVA, id: diva2:1820073
Funder
Stockholm University
Marianne and Marcus Wallenberg Foundation, MMW 2019.0160
Available from: 2023-12-15. Created: 2023-12-15. Last updated: 2024-04-09. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus
