Posters II Abstracts

Michael Cannon       

The Arc and the Circle: Cognitivist and Post-Cognitivist Kinds of Intelligence           

N/A

 

Kanyu Wang

Uncertainty, awareness, and why now is not the right time to prioritise AI existential risk

There are two main approaches to AI risks: to prioritise the AI-related risks that we understand better, or to prioritise AI existential risk (Bostrom 2014; Müller 2016; UK Parliament 2023). The case for the latter is straightforward:

P1: Whether a risk should be prioritised now depends on whether its stake is higher than that of any other risk.

P2: The stake of AI existential risk is higher than that of any other risk.

C: We should prioritise AI existential risk now.

P2 is hard to deny, but P1 is questionable. I argue instead that whether a risk should be prioritised now depends on whether its rationally discounted stake is higher than that of any other risk (P1*), and that the stake of AI existential risk rationally ought to be severely discounted (P2*), because everything about AI existential risk is as yet highly uncertain and there are many possibilities of which we are unaware (Vold and Harris 2021; Müller and Cannon 2021). So, we should not prioritise AI existential risk now.

Uncertainty aversion can be rational (Ellsberg 1961; Fleurbaey 2018; Stefánsson and Bradley 2019). Imagine that there is a fair coin and an urn. In the urn there are some black balls and some white balls, but you do not know the numbers. You can choose between two bets. If you choose Bet 1, you win £100 if the coin lands heads and £0 if it lands tails. If you choose Bet 2, you win £100 if you draw a black ball from the urn and £0 if you draw a white one. Most people would strictly prefer Bet 1 to Bet 2, which seems rational. This means that most people rationally discount the stake of Urn because of uncertainty.
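To put this in expected-value terms (my gloss rather than the author's: it assumes a symmetric prior over the urn's composition, and the discount factor δ is introduced purely for illustration), under that prior the two bets have equal expected payoffs, so a strict preference for Bet 1 amounts to discounting the urn bet:

\[
E[\mathrm{Bet\ 1}] = \tfrac{1}{2}\cdot 100 = 50, \qquad E[\mathrm{Bet\ 2}] = p_{\mathrm{black}}\cdot 100 = 50,
\]
\[
\mathrm{Bet\ 1} \succ \mathrm{Bet\ 2} \;\Rightarrow\; V(\mathrm{Bet\ 2}) = \delta \cdot 50 < 50 \quad \text{for some } \delta < 1.
\]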

Dealing with unawareness rationally is infamously hard (Karni and Vierø 2017; Bradley 2017; Mahtani 2021; Steele and Stefánsson 2021). Nevertheless, unawareness aversion (a term coined by me; see Wang ms.) can be rational. Imagine that, in addition to Urn, there is also Urn*, which may contain black balls and white balls but may also contain balls of other colours which, if they exist, correspond to other payoffs. You now need to choose between Bet 2 and Bet 3. Bet 2 is still to bet on Urn. Bet 3 is to bet on Urn*. Most people would strictly prefer Bet 2 to Bet 3, which seems rational. This means that most people rationally discount the stake of Urn* because of unawareness. You know that you do not know something. To be rational, you should be responsive to what you know, including your knowledge that there are possibilities which you do not know.

We do not know what we do not know about AI existential risk, but we know, or at least feel reasonably confident, that there are many things we do not know about it, whereas we know more about some other AI-related risks. A choice between prioritising the AI-related risks we understand better and prioritising AI existential risk is thus analogous to a choice between betting on the fair coin and betting on Urn*. We rationally ought to bet on the fair coin until we know meaningfully more about the balls in the urn.

For example, suppose that we need to choose between prioritising AI fraud risk and prioritising AI takeover risk. We are 0.5 confident that our intervention can be useful in mitigating AI fraud risk if we choose to prioritise this risk, and we estimate that if our intervention is useful in the case of AI fraud, total human wellbeing will improve by 100 units. We are uncertain about how high our credence is, or should be, in the usefulness of our intervention in mitigating AI takeover risk if we choose to prioritise that risk, although we estimate that if our intervention is useful in the case of AI takeover, total human wellbeing will improve by 100,000 units. Moreover, we know that we do not know how any AI takeover could happen, what such a world would be like, what humans would think if it happened, what options of intervention we would have, and how any intervention could work. Whether the stake of AI takeover, in this example, can be so severely discounted that it is no longer rational for us to prioritise AI takeover risk is a matter of degree, and we will in any case need more input from machine learning experts. But the basic lesson is clear: it is no longer a no-brainer that we should prioritise AI takeover risk even if a takeover is realistically possible and the stake is much higher.
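To make the comparison concrete, the following is a minimal sketch of how such rational discounting might be modelled. It is my illustration rather than part of the abstract: the stand-in credence and discount factors are hypothetical numbers chosen only to show how a severe enough discount can reverse the naive expected-value ranking.

```python
# Illustrative sketch only: discounting stakes for uncertainty and unawareness.
# The credence stand-in and discount factors below are hypothetical placeholders.

def discounted_stake(credence, stake, discount):
    """Expected wellbeing improvement, discounted for uncertainty/unawareness."""
    return credence * stake * discount

# AI fraud risk: comparatively well understood, so no discount is applied.
fraud = discounted_stake(credence=0.5, stake=100, discount=1.0)

# AI takeover risk: the credence itself is unknown and there are possibilities
# we are unaware of, so a stand-in credence and a severe discount are assumed.
takeover = discounted_stake(credence=0.5, stake=100_000, discount=0.0005)

print(f"Discounted stake of prioritising AI fraud risk:    {fraud}")     # 50.0
print(f"Discounted stake of prioritising AI takeover risk: {takeover}")  # 25.0
```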

 

Markus Rüther        

Why care about sustainable AI? Some thoughts from the debate on meaning in life     

There is little doubt that Artificial Intelligence (AI) is transforming, and will continue to transform, the world. However, the potential for positive change that AI brings also carries the risk of negative impacts on society. The recent surge in AI, fueled by ever-increasing amounts of data and computing power, has given birth to the field of AI ethics — the study of ethical and societal issues facing developers, producers, consumers, citizens, policymakers, and civil society organizations. The literature on AI is rich and has identified several ethical themes, ranging from the broad threat that AI technologies might develop superintelligence to more tangible issues of explainability, privacy, and data biases.

More recently, a group of ethicists has linked AI ethics to another subject, which some describe as a new wave of AI ethics: the field's own sustainability and environmental impact ([1] and [2] stand in for many others). Numerous reasons have been put forward for this expansion. Primarily, these reasons pertain to the well-being of individuals or the moral duties we have towards other entities. A sustainable environment, it is argued, is something we owe to ourselves, our children, future generations, the environment, or the planet as a whole. These are, of course, comprehensive reasons for justifying our commitment to sustainability. But are they the only path to justification? Aren't there also other possibilities?

In this talk, I am guided by the assumption that there are indeed other relevant aspects that we can take into account if we want to justify our commitment to sustainability. More specifically, I attempt to defend the thesis that there are reasons that lie beyond self-interest and moral obligations. But how might these reasons be characterized? To discern this, I believe it is worthwhile to turn to a debate in normative ethics, specifically the so-called "meaning in life" debate that has been flourishing over the last 15 to 20 years in analytic ethics, and which already has some bearing on AI ethics (more generally [3], and for the application to AI ethics [4]). In this debate, one of the essential assumptions is that reasons for meaning are an independent factor in our ethical considerations that cannot be equated with reasons of well-being and morality. Following this debate, my working hypothesis is that these reasons for meaning can also be influential when applied to the AI ethics debate, particularly regarding sustainable AI.

Given these assumptions, one can identify at least three arguments which can be advanced to connect meaningfulness and sustainable AI. I offer a more detailed examination of them in the extended version of the paper (see above for my comment on the background of this talk). Here, I only mention the basic elements of the arguments and my conclusions:

1. The Meaning-conferring-action Argument posits that caring for sustainable AI is reasonable because caring is a meaningful action. I argue that while it is plausible to assume that caring confers meaning, the argument does not deliver universal pro tanto reasons. This is due to relative factors such as circumstances and personality traits, which suggest that caring for sustainable AI might not be a reason for everyone in every situation.

2. The Afterlife Argument, heavily inspired by Samuel Scheffler's “afterlife conjecture”, suggests that there are reasons to care about sustainable AI because the afterlife, understood as the future lives of others after our death, adds meaning to our current life. I contend that while the argument can potentially build on solid intuitions that are hard to overlook, it needs some theoretical refinement to be conclusive.

3. The Harm Argument is predicated on the rationale that inflicting harm on future generations leads not only to a moral loss but also to a loss of meaning in one's life. I argue that this argument has intuitive appeal. However, I also want to highlight some presuppositions of the argument that need further elaboration. The argument is underdeveloped in terms of the “currency” and “measure” of harm, the role and relevance of omissions in causing harm, the place of harm in a theory of meaning, and the assumption that negative meaning or anti-meaning exists. All things considered, however, I argue that the Harm Argument is the best approach so far if one is in search of universal pro tanto reasons for caring about sustainable AI. Therefore, if one intends to explore the “Why care?” question further, it would be promising for future work to start from here.

[1] van Wynsberghe, A. 2021. "Sustainable AI: AI for Sustainability and the Sustainability of AI." AI and Ethics, 1–6.

[2] Brevini, B. 2020. "Black Boxes, Not Green: Mythologizing Artificial Intelligence and Omitting the Environment." Big Data & Society 7(2).

[3] Metz, Thaddeus. "The Meaning of Life." The Stanford Encyclopedia of Philosophy (Fall 2023 Edition), Edward N. Zalta & Uri Nodelman (eds.), forthcoming. URL = <https://plato.stanford.edu/archives/fall2023/entries/life-meaning/>.

[4] Nyholm, S., and Rüther, M. 2023. "Meaning in Life in AI Ethics—Some Trends and Perspectives." Philosophy & Technology 36, 20. https://doi.org/10.1007/s13347-023-00620-z

 

Renee Ye

Ameliorating Anthropocentrism: New Directions for Artificial Consciousness

For a considerable period, the distribution question 'Which entities besides humans are conscious?' and the definitive question 'Is X conscious?' have been central to the scientific and philosophical discourse on consciousness. I argue that these questions are ultimately misleading. Instead, I propose that we prioritize the relative question — 'How is X conscious with respect to human consciousness?' — as it offers a more fruitful avenue of inquiry. I therefore advocate a shift in consciousness research towards answering the relative question, which offers a more comprehensive and unbiased approach to AI consciousness research.

Our understanding of the world, including consciousness, is inherently centered on the human experience. The human cognitive framework biases our perception and comprehension of the world towards a human point of view. Anthropocentrism is ubiquitous in consciousness research because research is conducted by humans, who formulate hypotheses, design experiments, and analyse data based on their own perspectives and tools.

While anthropocentrism is often considered harmful, I argue that it can serve as a benign and necessary feature of comparative consciousness research. To make this case, I present a nuanced taxonomy of anthropocentrism. I distinguish between Pernicious Anthropocentrism and Benign Anthropocentrism; I suggest that researchers exercise extreme caution to avoid falling into the pernicious type and instead aim to maximize the application of benign anthropocentrism. First, there is ‘pernicious anthropocentrism’:

Pernicious Anthropocentrism: When a researcher is being perniciously anthropocentric, she uses human consciousness to identify very specific features of conscious experience (often only realized in humans) in order to define consciousness in general.

Pernicious anthropocentrism colours our epistemic access to other forms of consciousness in a distinctly human light and inherently neglects evidence for or against consciousness that is not human-like, especially artificial consciousness. Within pernicious anthropocentrism, there are two subtypes:

Explicit Pernicious Anthropocentrism: A researcher is explicitly deploying pernicious anthropocentrism when she deliberately endorses the false presupposition that human consciousness is the gold standard for consciousness or, at least, the most expedient model.

Implicit Pernicious Anthropocentrism: A researcher is implicitly deploying pernicious anthropocentrism when she non-consciously prioritises features of human consciousness in consciousness research, for example, those features based on biological and brain-based models.

With explicit pernicious anthropocentrism, researchers actively disregard the possibility of other forms of consciousness and fail to acknowledge the limitations and biases inherent in using human consciousness as the sole reference point. With implicit pernicious anthropocentrism, however, researchers simply give preference to features of human consciousness (Miller 2005; Signorelli et al. 2021) in consciousness research, a preference which is not justified. Little contemporary work in comparative consciousness falls into the category of explicit pernicious anthropocentrism. However, implicit pernicious anthropocentrism appears to persist in dominant approaches to consciousness, such as the biological approach (Godfrey-Smith 2016; Thompson 2010), the marker approach (Tye 2017; Shriver 2017), and the theory approach (Andrews 2020; Birch 2020).

Explicit and implicit pernicious anthropocentrism hinder artificial consciousness research. They lead to biased research by steering investigations towards replicating or mimicking human-like consciousness in AI systems, overlooking unique AI-specific experiences. Moreover, they neglect AI-specific phenomena by denying that AI systems may have their own unique forms of consciousness, which may not align with human consciousness. When they persist, researchers may neglect to investigate AI-specific phenomena, limiting the potential for breakthroughs in understanding artificial consciousness.

I propose to distinguish yet another kind of anthropocentrism, 'benign anthropocentrism':

Benign Anthropocentrism: A researcher can be said to be benignly anthropocentric when their starting point for investigating consciousness is a rich cognitive and behavioural account of human cognition, which is then used to partially guide investigation into human and non-human conscious experiences. Importantly, human consciousness is not used to define consciousness as such; rather, it gives us an open-ended framework that allows us to understand human and non-human consciousness on their own terms.

Benign anthropocentrism is importantly different from pernicious anthropocentrism because it uses human experience as a valuable starting point for studying consciousness, rather than claiming the superiority of models of human consciousness. Through benign anthropocentrism, it is possible to leverage our understanding of human consciousness to develop a framework that allows us to systematically investigate the profiles of human and non-human conscious experiences by providing a basis for comparison, highlighting the similarities and differences between human and non-human consciousness.

Benign anthropocentrism opens up new possibilities, leading to a deeper understanding of AI consciousness, ethical considerations, and the practical applications of AI technology. First, benign anthropocentrism allows researchers to compare AI consciousness to human consciousness in a systematic and unbiased manner, and provides a structured framework for understanding how AI experiences the world in relation to humans. Second, benign anthropocentrism can guide the development of AI models that incorporate aspects of consciousness or subjective experience, creating AI systems that can interact with and understand human consciousness more effectively and leading to more empathetic and human-centred AI applications. Third, by recognizing the similarities and differences in how humans and AI systems experience the world, it becomes easier to design AI systems that complement human abilities and enhance various aspects of human life, hence facilitating collaboration between humans and AI systems.

References

Andrews, K. (2020). How to Study Animal Minds, Cambridge: Cambridge University Press.

Birch, J., A. K. Schnell, and N. S. Clayton. (2020). Dimensions of animal consciousness. Trends in Cognitive Sciences, 24(10): 789–801.

Miller, G. (2005). What is the biological basis of consciousness? Science, 309(5731), 79.

Povinelli, D. J. (2004). Behind the ape’s appearance: Escaping anthropocentrism in the study of other minds. Daedalus, 133(1): 29–41.

Shriver, A. (2017). The unpleasantness of pain for nonhuman animals. In The Routledge handbook of philosophy of animal minds (pp. 176-184). Routledge.

Signorelli, C. M., & Meling, D. (2021). Towards new concepts for a biological neuroscience of consciousness. Cognitive Neurodynamics, 15(5), 783-804.

Thompson, E. (2010). Mind in life: Biology, phenomenology, and the sciences of mind. Harvard University Press.

Tye, M. (2017). Philosophical problems of consciousness. The Blackwell companion to consciousness, 17-31.

 

Robert William Clowes       

Thinking, Creating and Feeling with Generative AI: Incorporating AI in our Cognitive and Affective Lives as Extended Minds and Virtual Personalities

N/A

 

Roman Krzanowski

Human Nature and Artificial Intelligence: Sizing the Gap

N/A

 

Tugba Yoldas

Success Semantics, Egocentric Motivations, and Artificial Moral Patients

N/A

 

Johannes Brinz        

Virtuality and Reality: The Simulation-Replication Distinction and its Implications for AI

Triggered by the rapid development in the field of AI, calls have been made to pause technological progress in order to better assess the potential risks and benefits of the mass use of AI systems. One point of contention is the question of whether AI systems might eventually become conscious. While there is a long tradition in philosophy concerned with artificial consciousness in digital computers, the implications of the use of brain-like, so-called neuromorphic hardware have so far been widely neglected. The present paper tries to make up for those shortcomings by putting forward an account that draws on the distinction between simulations and replications of computational models. Indeed, while simulations only implement the abstract causal structure of a model, replications additionally recreate the replicating causal structure. In this paper, I argue that AI algorithms running on neuromorphic hardware are more likely to generate artificial consciousness than simulations on standard digital hardware: this is because they recreate not only the abstract causal structure of biological brains, but also parts of the replicating causal structure.

 

Jumbly Grindrod

Transformer architectures and the radical contextualism debate

I will argue that the widely used transformer architecture (Vaswani et al., 2017) suggests a novel and interesting approach to the relationship between context and meaning. I will show this by situating that approach within two related debates: the radical contextualism debate and the polysemy debate. Focusing specifically on the self-attention mechanism, I will argue that while the transformer picture is committed to widespread, linguistically unlicensed context-sensitivity, it does not constitute a form of radical contextualism in Recanati’s (2003) sense. I will also argue that the transformer approach to capturing polysemy is a novel combination of two current approaches to polysemy: what Trott and Bergen (2023) label the meaning continuity approach and the core representation approach.
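For readers unfamiliar with the mechanism the abstract focuses on, here is a minimal sketch of single-head scaled dot-product self-attention (after Vaswani et al., 2017); it is my illustration rather than part of the abstract, and the toy dimensions and random weights are placeholders. The point it makes concrete is that each token's output representation is a weighted mixture of every token's representation, so meaning is modulated by context at every position.

```python
import numpy as np

# Minimal sketch of single-head scaled dot-product self-attention
# (after Vaswani et al., 2017). Toy sizes and random weights are placeholders;
# real transformers use learned projections, multiple heads, and larger dimensions.

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                       # four tokens, 8-dimensional embeddings
X = rng.normal(size=(seq_len, d_model))       # context-independent token embeddings

W_q = rng.normal(size=(d_model, d_model))     # stand-ins for learned projections
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))

Q, K, V = X @ W_q, X @ W_k, X @ W_v

scores = Q @ K.T / np.sqrt(d_model)           # relevance of every token to every other token
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax
contextualised = weights @ V                  # each output row mixes all tokens' values

print(weights.round(2))                       # attention pattern over the sequence
print(contextualised.shape)                   # (4, 8): same shape, now context-sensitive
```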

 

Laura Haaber Ihle and Annika M. Schoene         

Ethics Guidelines for AI-based Suicide Prevention Tools

N/A

 

Luke Kersten

Mechanistic Approach to the Computational Sufficiency Thesis

N/A

 

Maria Federica Norelli et al.          

Data, Phenomena and Models in Machine Learning       

N/A

 

Māris Kūlis

Lost in Translation: The Trap of Bad Metaphors in AI's Deceptive Simplicity

Metaphors in AI do more than illustrate; they shape thought, research, and ethical postures. Across history, metaphors have served to simplify complex scientific concepts by creating bridges between the familiar and the unknown, exemplified notably by the "AI as human" and "brain-computer" comparisons. The "brain-computer" metaphor further oversimplifies, equating nuanced cognitive functions with computational operations, potentially obscuring the profound differences between organic minds and machines. This presentation navigates the intricate metaphorical landscape in AI, examining the deceptive simplicity and implications of these analogies. Are they merely convenient linguistic constructs, or do they misguide our understanding of artificial intelligence? A critical dissection of these metaphors seeks to demystify the assumptions and expectations set upon AI, offering a clearer lens through which to view this technological phenomenon.

The entrenchment of certain metaphors in the discourse around artificial intelligence perpetuates critical misunderstandings. By framing AI in the mold of human cognition or computational processes, we inadvertently set unrealistic expectations. This anthropomorphic and mechanistic viewpoint neglects AI's unique nature, potentially leading to its misguided development and regulation. For instance, the "AI as human" metaphor might compel us to consider rights for entities that are not sentient, while the "brain-computer" analogy could diminish efforts towards understanding AI as an autonomous system rather than an extension of human cognition. While metaphors have been crucial in advancing intellectual thought, they carry intrinsic limitations. These metaphors hold significant ethical implications, as viewing, for example, the mind as a machine or emotions as simple pressure valves could distort our appreciation of human consciousness, emotional depth, and the value of life.

Here phenomenology, with its rigorous exploration of lived experiences and consciousness, provides a framework for deconstructing the anthropomorphic and mechanistic metaphors that dominate perceptions of AI. This study involves a hermeneutical analysis of language, conceptual models, and the subjective underpinnings within the dominant metaphors, relying on philosophical argumentation and historical context. It scrutinizes the metaphors' validity and their capacity to capture the true 'phenomenon' of artificial intelligence, seeking to establish a more authentic framework for understanding AI. Moreover, in the intersection of language philosophy and AI, the role of hermeneutics becomes crucial in dissecting the 'bad metaphors' that often plague discussions about AI. These metaphors, though simplifying complex concepts, can distort our understanding of what AI is and what it can become.

By deconstructing the "AI as human" and "brain-computer" metaphors, we can expect to uncover their restrictive and possibly misleading influences, which often compel us to transpose human attributes or computational functions onto entities where they might not wholly belong. This anthropocentric skew potentially blinds us to AI's unique 'otherness'. Enabling a new discourse free from existing metaphors could provide AI developers with insights for more informed programming and design, avoiding projections rooted in human experience.