Keynote A abstract
Generative AI's Gappiness: Meaningfulness, Authorship, & the Credit-Blame Asymmetry
Sven Nyholm
Professor of Ethics of Artificial Intelligence
LMU Munich
11.00–12.30, 15 Dec 2023
Abstract: When generative AI technologies generate novel texts, images, or music in response to prompts or instructions from their human users, are the resulting outputs meaningful in all the ways in which human-created texts, images, or music can be meaningful? Moreover, who exactly should be considered the author – or authors – of these AI outputs? Are texts created by generative AI based on large language models perhaps best considered authorless texts? Would that affect their meaning? In my presentation, I will relate these questions to the topic of who (if anyone) can take credit for, or potentially be blameworthy for, outputs generated with the help of large language models and other generative AI technologies. I will argue that there is an important asymmetry in how easily people can be praiseworthy or blameworthy for outputs they create with the help of generative AI: in general, it is much harder to be praiseworthy for impressive outputs of generative AI than it is to be blameworthy for bad or harmful outputs we may produce with its help. This has significant implications for the issues of meaning and authorship. Generative AI technologies, I shall argue, are in important ways “gappy”. That is, they create various gaps with respect to key aspects of meaning and authorship, as well as with respect to responsibility for their outputs. To fill these gaps, we need to develop new ideas and new norms concerning what counts as meaningful text, authorship, and responsibility (both credit and blame) in relation to AI-generated outputs such as texts, images, or music.