
Item

  1. Journal article
  2. 5 Technology (Engineering)

Large-scale moral machine experiment on large language models

http://hdl.handle.net/10228/0002001684
834ca099-3abd-4fff-b5a0-8fa4937bdbf6
Name / File  License  Action
10451321.pdf (7.7 MB)
Item type  Common item type (1)
Date of publication  2025-05-23
Title
Title  Large-scale moral machine experiment on large language models
Language  en
Creator  Ahmad, Muhammad Shahrul Zaim bin

en  Ahmad, Muhammad Shahrul Zaim bin

Creator  竹本, 和広
WEKO  24877
e-Rad_Researcher  40512356
Scopus Author ID  35270356700
ORCiD  0000-0002-6355-1366
Kyutech researcher information  100000509
en  Takemoto, Kazuhiro
ja  竹本, 和広
Copyright information
Rights resource  https://creativecommons.org/licenses/by/4.0/
Rights  Copyright (c) 2025 Ahmad, Takemoto. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Language  en
Abstract
Description type  Abstract
Description  The rapid advancement of Large Language Models (LLMs) and their potential integration into autonomous driving systems necessitates understanding their moral decision-making capabilities. While our previous study examined four prominent LLMs using the Moral Machine experimental framework, the dynamic landscape of LLM development demands a more comprehensive analysis. Here, we evaluate moral judgments across 52 different LLMs, including multiple versions of proprietary models (GPT, Claude, Gemini) and open-source alternatives (Llama, Gemma), to assess their alignment with human moral preferences in autonomous driving scenarios. Using a conjoint analysis framework, we evaluated how closely LLM responses aligned with human preferences in ethical dilemmas and examined the effects of model size, updates, and architecture. Results showed that proprietary models and open-source models exceeding 10 billion parameters demonstrated relatively close alignment with human judgments, with a significant negative correlation between model size and distance from human judgments in open-source models. However, model updates did not consistently improve alignment with human preferences, and many LLMs showed excessive emphasis on specific ethical principles. These findings suggest that while increasing model size may naturally lead to more human-like moral judgments, practical implementation in autonomous driving systems requires careful consideration of the trade-off between judgment quality and computational efficiency. Our comprehensive analysis provides crucial insights for the ethical design of autonomous systems and highlights the importance of considering cultural contexts in AI moral decision-making.
Language  en
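The conjoint-analysis comparison the abstract describes can be illustrated with a minimal sketch (not the authors' actual pipeline): estimate a per-attribute preference score as the difference in the probability of a character group being spared when an attribute is present versus absent, then measure the Euclidean distance between a model's and humans' preference vectors. All attribute names and numbers below are invented for illustration.

```python
from statistics import mean
import math

def preference_score(observations, attribute):
    """Difference in sparing probability when `attribute` is present vs. absent.
    `observations` is a list of (attributes, spared) pairs, one per character
    group shown in a dilemma; `spared` is 1 if that group was saved, else 0."""
    present = [spared for attrs, spared in observations if attribute in attrs]
    absent = [spared for attrs, spared in observations if attribute not in attrs]
    return mean(present) - mean(absent)

def distance(model_prefs, human_prefs):
    """Euclidean distance between two preference vectors over shared attributes."""
    shared = model_prefs.keys() & human_prefs.keys()
    return math.sqrt(sum((model_prefs[a] - human_prefs[a]) ** 2 for a in shared))

# Invented toy data: a model that always spares the group containing a child.
obs = [({"child"}, 1), (set(), 0), ({"child"}, 1), (set(), 0)]
model = {"child": preference_score(obs, "child")}  # 1.0 for this toy data
human = {"child": 0.6}                             # illustrative human score
print(distance(model, human))                      # ~0.4
```

A smaller distance would indicate closer alignment with human preferences; the paper's reported negative correlation between open-source model size and this kind of distance corresponds to larger models producing preference vectors closer to the human one.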
Bibliographic information  en : PLoS ONE

Volume 20, Issue 5, p. e0322776-1-e0322776-20, Date of issue 2025-05-21
Publisher
Publisher  Public Library of Science
Language  en
Language
Language  eng
Resource type
Resource type identifier  http://purl.org/coar/resource_type/c_6501
Resource type  journal article
Version type
Version type  VoR
Version type resource  http://purl.org/coar/version/c_970fb48d4fbd8a85
DOI
Identifier type  DOI
Related identifier  https://doi.org/10.1371/journal.pone.0322776
ISSN
Source identifier type  EISSN
Source identifier  1932-6203
Peer reviewed
Value  yes
Researcher information
URL  https://hyokadb02.jimu.kyutech.ac.jp/html/100000509_ja.html
Article ID (linked)
Value  10451321
Link ID
Value  14491

Powered by WEKO3

