Presentations C: Abstracts
Session E
Giacomo Figà Talamanca
From AI to Octopi and Back: Why AI Systems Might Look Like (but Should Not Be Seen as) Agents
N/A
Charles Rathkopf
Do LLMs Believe?
N/A
Kris Goffin
Emotion Recognition Software, Bias and Emotional Complexity
Emotion Recognition Software (ERS) is software, sometimes enhanced by artificial intelligence (AI), that aims to track human emotions from various inputs, such as facial expressions and vocal sounds (Barrett et al., 2019; Picard, 1997; Ezzameli & Mahersia, 2023). Companies employ ERS to recognize emotions, for instance inferring happiness from a consumer’s smile. ERS is used for targeted advertising: people’s emotions are tracked and harnessed to infer their preferences, so that they can be served more personalized advertisements. ERS can also be adapted to support hiring decisions. A company’s HR department might implement ERS to assist with its decisions, for example by trying to determine which candidate is more anxious and which more stable.
However, there is a significant problem with ERS: it is biased. For instance, ERS often displays a gender bias (Domnich & Anbarjafari, 2021): women are more often misidentified as anxious than men are. Using such an ERS application in hiring decisions can harm women’s chances of getting hired. ERS very often makes mistakes that reproduce and reinforce existing stereotypes and prejudices based on gender and race.
These are, I will argue, the reasons why ERS is so often biased:
(1) It does not take into account the complexity of emotional content.
(2) It is used to make inferences that are too simple.
First, I will analyze the emotion theory behind ERS applications. I will investigate which theory of emotion is implied in ERS applications and how this might contribute to bias. Many ERS applications, for instance, rely on Basic Emotion Theory (BET), which states that people have only a limited set of universal basic emotions (Ekman & Friesen, 2003). Each emotion corresponds to a characteristic facial expression: for example, joy corresponds to a smile. I will argue that BET-based ERS is problematic. In previous work, I have argued that emotion is complex and that basic emotion labels, such as “fear”, “sadness”, and “joy”, are too limited to fit the rich diversity of emotional experiences. Similarly, I will argue that BET-based software cannot deal with emotions that exceed the simple categories of “sadness” and “fear”, such as an existential dread that includes both anxiety and sadness.
Secondly, I will argue that a similar mistake is made in the application of ERS, outside of the software itself. Too often, ERS applications are used to make inferences that do not take the complexity of emotion into account. One uses ERS to infer a person’s preferences (“if a person smiles, they must like the product”) or to conclude that a scowling immigrant might be dangerous. It is well known that biases arise when we make simple inferences and use “shortcuts” in our reasoning.
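To make the two worries concrete, here is a toy sketch in Python (my own illustration, not drawn from any actual ERS product or from the author’s work): a BET-style classifier that forces every input into a single basic-emotion label, followed by the kind of one-step inference rule criticized above. The label set, scores, and decision rule are all hypothetical.

```python
# Hypothetical toy example: a BET-style classifier plus an overly simple
# downstream inference rule. Not based on any real ERS product.

BASIC_EMOTIONS = ["joy", "sadness", "fear", "anger", "surprise", "disgust"]

def bet_classify(expression_scores: dict) -> str:
    """Force the input into exactly one basic-emotion label (problem 1):
    mixed or complex states are flattened into a single category."""
    return max(BASIC_EMOTIONS, key=lambda label: expression_scores.get(label, 0.0))

def simple_inference(label: str) -> str:
    """One label, one conclusion (problem 2): a reasoning 'shortcut'."""
    return "likes the product" if label == "joy" else "does not like the product"

# An "existential dread" blending anxiety (fear) and sadness: the fixed label
# set cannot express the mixture, so the classifier silently discards it.
scores = {"fear": 0.48, "sadness": 0.47, "joy": 0.05}
label = bet_classify(scores)
print(label, "->", simple_inference(label))  # fear -> does not like the product
```

The mixed state is reduced to a single label, and the downstream rule then draws a conclusion that the underlying emotional complexity does not warrant.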
Nicolas Kuske
Consciousness in Artificial Systems: Bridging Sensorimotor Theory and Global Workspace in In-Silico Models
In the aftermath of the success of attention-based transformer networks, the debate over the potential and role of consciousness in artificial systems has intensified. Prominently, the global neuronal workspace theory has emerged as a front-runner in the endeavor to model consciousness in computational terms. A recent advance in mapping the theory onto state-of-the-art machine learning tools is the model of a global latent workspace. It introduces a central latent representation around which multiple modules are constructed; content from any one module can be translated to any other module and back with minimal loss. In this talk, we walk through a thought experiment involving a minimal setup comprising one deep sensory module and one deep motor module, which illustrates the emergence of latent sensorimotor representations in the intermediate layer connecting the two. In the human brain, law-like changes of sensory input in relation to motor output have been proposed to constitute the neuronal correlate of phenomenal conscious experience. The underlying sensorimotor theory encompasses a rich mathematical framework, yet implementations of intelligent systems based on this theory have thus far been confined to proof-of-concept and basic prototype applications. Here, the natural appearance of global latent sensorimotor representations links two major neuroscientific theories of consciousness in a powerful machine learning setup. Among the several remaining questions, one may ask: is this artificial system conscious?
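A minimal sketch, assuming a plain feed-forward PyTorch implementation, of the setup in the thought experiment: one deep sensory module and one deep motor module joined by a shared latent layer. The layer sizes, class name, and the absence of any training loop are illustrative assumptions on my part, not details taken from the talk.

```python
# Illustrative sketch only: layer sizes and names are assumptions.
import torch
import torch.nn as nn

class SensorimotorWorkspace(nn.Module):
    """One deep sensory module and one deep motor module sharing a latent layer."""

    def __init__(self, sensory_dim=64, motor_dim=8, latent_dim=16):
        super().__init__()
        # Deep sensory module: maps sensory input into the shared latent space.
        self.sensory_encoder = nn.Sequential(
            nn.Linear(sensory_dim, 32), nn.ReLU(),
            nn.Linear(32, latent_dim),
        )
        # Deep motor module: maps the shared latent representation to motor output.
        self.motor_decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(),
            nn.Linear(32, motor_dim),
        )

    def forward(self, sensory_input):
        # The intermediate layer where latent sensorimotor representations emerge.
        latent = self.sensory_encoder(sensory_input)
        motor_output = self.motor_decoder(latent)
        return latent, motor_output

model = SensorimotorWorkspace()
latent, motor = model(torch.randn(1, 64))
print(latent.shape, motor.shape)  # torch.Size([1, 16]) torch.Size([1, 8])
```

The latent vector in the middle is the candidate global latent sensorimotor representation that the closing question of the abstract is about.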
Session F
Peter Königs
Negativity Bias in AI Ethics and the Case for AI Optimism
Flipping through the major journals in ethics of technology, one gets the impression that the use of AI is deeply problematic in countless ways. The big debates in AI ethics almost invariably revolve around problems, such as the emergence of responsibility gaps, bias and fairness issues, job displacement, or privacy concerns, to name but a few. A leading AI ethicist rightly observes a 'rising tide of panic about robots and AI' (Danaher 2019). I want to offer a more optimistic perspective. The 'rising tide of panic' is probably to a good extent the result of a built-in negativity bias in AI ethics. This means that we have higher-order evidence to believe that AI is ethically less problematic, and less in need of regulation, than the somewhat panicky mood permeating AI ethics suggests.
To recognize this bias, one must consider two things: 1) The specifics of the subject matter of AI ethics. 2) The incentive structures within academic AI ethics.
The subject matter of AI ethics, and of the ethics of technology in general, is technologies and specific technological applications. Technologies can be understood as tools devised by humans to solve problems. One could say that the subject matter of AI ethics is (proposed) solutions. AI ethicists wishing to comment on their subject matter are practically forced to find fault with these solutions, and the natural way for them to do so is by identifying ethical problems. As the law of the instrument has it, when the only tool you have is a hammer, everything looks like a nail. In AI ethics, the go-to tool is identifying ethical issues. Thus, there is a constant inclination to pinpoint flaws in AI applications and depict them as ethically problematic – whether they are or not. This distinguishes the ethics of technology from other subfields of philosophy, where the occasion for philosophizing is not solutions but long-standing problems ('What is justice?', 'When is a belief justified?', 'What is consciousness?', etc.). In these subfields, philosophers are, by and large, seeking solutions, namely answers to these questions, not problems.
This built-in imperative to find ethical problems is amplified by the incentive structure within academia. AI ethicists must publish and are therefore structurally encouraged to keep identifying problems with AI. Moreover, alarmist papers that raise ethical concerns tend to receive more attention and recognition than response pieces that deflate a problem. By the same token, a funding proposal in AI ethics that does not portray AI as, in one way or another, ethically problematic has little chance of approval.
From this it does not follow that the problems discussed in AI ethics are fictitious. Every ethical concern about AI needs to be taken seriously and deserves to be considered on its own terms. However, three things do seem to follow:
1. We, the AI Ethics community, are probably inflating the ethical problems associated with AI.
2. Legislators should proceed with caution in their regulatory efforts and consider the possibility that some problems with AI are being overestimated.
3. AI ethicists should be aware of the built-in negativity bias in AI ethics and seek to counteract it in their capacity as authors and reviewers.
These somewhat theoretical speculations will be supported by a discussion of one plausible example of an unwarranted tech panic that reflects the negativity bias in AI ethics: the case of 'surveillance capitalism' and its impact on politics. The term is used disparagingly to refer to the business model of tech companies that monetize user data, such as Google, Facebook, or YouTube. The prevailing narrative, both in popular discourse and in much of the philosophical debate, is one of deep concern, if not outright doom. It has been alleged that democracy is locked in a 'death match' with 'surveillance capitalism' (Zuboff 2022). However, a look at the empirical evidence does not support this dystopian picture. Empirical studies suggest that fears about algorithm-induced misinformation, filter bubbles, echo chambers, and polarization are exaggerated (e.g. Altay et al. 2023; Bruns 2019; Kupferschmidt 2023; Ross Arguedas et al. 2023). The debate surrounding the political impact of 'surveillance capitalism' thus likely represents one case to which the aforementioned recommendations apply:
1. The ethical problems associated with 'surveillance capitalism' have been exaggerated.
2. Calls for regulatory action might be premature and misguided, potentially leading to more harm than good.
3. Commentators in this debate should strive to maintain a balanced perspective and be careful not to be swayed by what many who study the empirical data regard as a moral panic.
If my assessment is correct that AI ethics has succumbed to a systematic negativity bias, we have reason to believe that the 'rising tide of panic about robots and AI' is to a considerable extent unwarranted. It is likely that AI ethicists have been over-diagnosing ethical problems. We thus possess higher-order reason to be more optimistic about AI.
Floriana Ferro
The RV Continuum as Flesh: A Phenomenological Interpretation of Mixed Reality
The Digital Revolution has ushered in a new era of perceptual experiences, stemming from the emergence of novel environments. Our daily lives in the analogue world now intersect with various dimensions, such as on-screen, virtual, and augmented realities. This wide intersection of analogue and digital dimensions is collectively known as Mixed Reality (MR), with its key technical features outlined by Costanza, Kunz, and Fjeld (2009). The complexity of perception within digital environments, heavily influenced by Artificial Intelligence (AI) in MR settings, gives rise to a host of profound questions. How do we experience our body and the objects around us when immersed in Virtual Reality (VR) or Augmented Reality (AR)? What similarities and differences exist between our experiences in the analogue world and those in MR? How can we precisely define MR? In this discussion, my particular focus lies in delineating the concept of MR through the Merleau-Pontian idea of “flesh,” aiming to craft a phenomenological theory of perception applicable to both analogue and digital dimensions.
(1) First, I begin with the perspective presented by Milgram and Kishino (Milgram et al., 1994; Milgram & Kishino, 1994), who view MR as belonging to the reality-virtuality (RV) continuum. According to this perspective, the MR spectrum encompasses digital environments that refer to the analogue world, including AR and AV (Augmented Virtuality). This perspective on reality has recently been revisited by Skarbez and others (2021), who incorporate VR in the MR spectrum. They also introduce a discontinuity at the far end of the continuum, a “Matrix-like” virtual environment. After presenting these two variations of the MR spectrum, I introduce my own perspective, which combines (a) Milgram and Kishino’s view of a continuous spectrum of reality that bridges the analogue and virtual dimensions, and (b) Skarbez, Smith, and Whitton’s idea of encompassing VR environments within the MR spectrum. In contrast to these two viewpoints, I challenge the notion of a completely analogue dimension (the so-called “real world”) and a completely digital one (an entirely immersive VR). My objection is based on the pervasiveness of digital devices, which has made it increasingly difficult to conceive of a purely analogue dimension.
(2) Secondly, to illustrate the soundness of this revisitation of MR, I draw inspiration from the phenomenological concept of the flesh, as theorized by Maurice Merleau-Ponty in The Visible and the Invisible (Merleau-Ponty, 1968). Merleau-Ponty develops the idea of an “extended body” that overcomes subjective experience and encompasses connections with both living and non-living entities. At the core of this interpretation are several passages in The Visible and the Invisible, where the flesh is depicted as a multi-dimensional and multi-level common element, having a “virtual focus” or a “virtual center” (Merleau-Ponty, 1968: 34, 115, 146, 215). Drawing on these passages and on Pierre Lévy’s perspective on the virtual, I define the virtual not as the opposite of the “real,” but in relation to the “actual” (Lévy, 1998): the virtual is not merely what is possible, but what is in the process of becoming actual. This idea is rooted in a dynamic understanding of reality, where singularities relate and move towards each other. Merleau-Ponty’s flesh is thus a virtual body, since it is “in essence interactive” (Diodato, 2012: 2): the virtual is the tissue of reality, characterized by multiple layers and networks of multiple singularities. I argue that Merleau-Ponty’s idea of the virtual challenges our common understanding of VR, which can be better defined as digital reality (Chalmers, 2017), and that his concept of the virtual applies to the whole MR spectrum.
(3) This paves the way for a posthuman reading of Merleau-Ponty’s phenomenology of the flesh (Ferro, 2021), a reading aligned with the second principle of Robert Pepperell’s Posthuman Manifesto, which delves into the profound technological transformation of the human species (Pepperell, 2003). Referring to Merleau-Ponty’s thought implies that this transformation is linked to the idea of an extended body (the flesh), viewed as a co-participation and interpenetration between analogue and digital components. This intertwining gives rise to a deep relationship between analogue and MR environments, a “transdimensional analogy” (Ferro, 2022) characterized by “mixed intentionality,” which pertains to the connection between human flesh and technology. This concept shares similarities with “hybrid intentionality” developed in post-phenomenology (Verbeek, 2008) and its “posthuman vision” (Verbeek, 2007). Despite the analogies between post-phenomenology and Merleau-Ponty’s later thinking (Hoel & Carusi, 2015), differences exist in their conception of the role of technology in the transformation of human bodily experience. Postphenomenology considers the interplay between subject and object as a triadic construct, involving the subject, technology, and the object. While “hybrid intentionality” goes beyond what Verbeek terms “technologically mediated intentionality” (Verbeek, 2008: 390-392), the amalgamation of human and technological elements primarily intensifies mediation and focuses on intentionality, characterized by active synthesis (Husserl, 2001). In contrast, a Merleau-Pontian interpretation of MR overcomes mediation, taking into account passive synthesis (Husserl, 2001), and views the flesh as an interpenetration between living and non-living components.
In conclusion, I demonstrate that the RV continuum, at the core of which MR essentially resides, can be defined as an amalgamation of the human living body (Leib) and digital technology, experienced at different levels and with different degrees of intensity. I also highlight that applying the idea of flesh to the RV continuum opens up new possibilities for the development of immersive MR environments.
Cited references:
- Chalmers, D. (2017). The Virtual and the Real. Disputatio, 9(46), 309-352.
- Costanza, E., Kunz, A., & Fjeld, M. (2009). Mixed Reality: A Survey. In D. Lalanne & J. Kohlas (Eds.), Human Machine Interaction: Research Results of the MMI Program (pp. 47-68). Springer.
- Diodato, R. (2012). Aesthetics of the Virtual (J. L. Harmon, Trans.). SUNY Press. (Original work published 2005)
- Ferro, F. (2021). Merleau-Ponty and the Digital Era: Flesh, Hybridization, and Posthuman. Scenari, 15, 189-205.
- Ferro, F. (2022). Perceptual Relations in Digital Environments. Foundations of Science. Advance online publication.
- Hoel, A. S., & Carusi, A. (2015). Thinking Technology with Merleau-Ponty. In R. Rosenberger & P.-P. Verbeek (Eds.), Postphenomenological Investigations: Essays on Human-Technology Relations (pp. 73-84). Lexington Books.
- Husserl, E. (2001). Analyses Concerning Passive and Active Synthesis: Lectures on Transcendental Logic (A.J. Steinbock, Trans.). Kluwer. (Original works published 1966, 2000)
- Lévy, P. (1998). Becoming Virtual: Reality in the Digital Age (R. Bononno, Trans.). Plenum Trade. (Original work published 1995)
- Merleau-Ponty, M. (1968). The Visible and the Invisible (A. Lingis, Trans.). Northwestern University Press. (Original work published 1964)
- Milgram, P., et al. (1994). Augmented Reality: A class of displays on the reality-virtuality continuum. Proceedings of SPIE, Telemanipulator and Telepresence Technologies, 2351, 282–292.
- Milgram, P., & Kishino, F. (1994). A Taxonomy of Mixed Reality Visual Displays. IEICE Transactions on Information and Systems, 77, 1321-1329.
- Pepperell, R. (2003). The Posthuman Condition: Consciousness Beyond the Brain. Intellect Books.
- Skarbez, R., Smith, M., & Whitton, M. C. (2021). Revisiting Milgram and Kishino’s Reality-Virtuality Continuum. Frontiers in Virtual Reality, 2, 647997.
- Verbeek, P.-P. (2007). Beyond the Human Eye: Technological Mediation and Posthuman Visions. In P. Kockelkoren (Ed.), Mediated Vision (pp. 43-53). Veenman Publishers and ArtEZ Press.
- Verbeek, P.-P. (2008). Cyborg Intentionality: Rethinking the Phenomenology of Human-Technology Relations. Phenomenology and the Cognitive Sciences, 7, 387–395.
Avigail Ferdman
AI and Deskilling of Human Capacities
AI creates a serious risk of “moral deskilling”: obviating the need for humans to exercise skillful moral judgment by relegating such judgments to machines. Shannon Vallor and others argue that the risk is that the very human ability to form moral judgments will be eroded [1]. When decisions about killing in war are farmed out to autonomous systems, the human actor is no longer able to perform a moral role, since they are unable to examine and evaluate the moral wisdom of a machine agent’s decision. Or, in the case of caregiving robots, relegating caregiving activities to robots may lead to deskilling in the ‘arts of personhood’ through the relationships of dependency that humans share with others. This is because when robots take over caregiving activities, humans will no longer need to cultivate holistic caring practices or perceive caregiving as an ineradicable human responsibility [2].
The philosophical treatment of moral deskilling is primarily focused on the risks of relegating moral practices to technologies: automated weapons technology, new media practices, and social robotics [3]. In this paper I propose that the worry about deskilling encompasses more than moral capacities: it should extend to other human capacities as well, including epistemic, social, and creative capacities. This has both ethical and moral implications, as follows.
From an ethical perspective, I argue that the risk from AI and other technologies is not restricted to the degradation of moral capacities but extends potentially to many of our innate human capacities. According to a perfectionist (neo-Aristotelian) view of ethics [4], humans have innate capacities such as the capacity to reason, social capacities, moral capacities, the capacity for creativity, and the capacity to will (willpower). As embodied beings, humans also have physical capacities. Humans flourish when they excel at realizing those capacities: by engaging in activities that trigger them and bring about some intrinsically valuable output [5]. For example, humans flourish when they exercise their rational capacity in the activity of learning, which produces new knowledge. Technologies like ChatGPT and social media could replace many of the activities that trigger the exercise of these capacities, impoverishing them and possibly rendering them obsolete. For example, social media creates epistemic bubbles and echo chambers that exploit our cognitive vulnerabilities and weaknesses, optimizing the platform to seduce users by creating a feeling of clarity, without any commitment to presenting a belief system that actually captures the world itself [6]. These environments suppress crucial epistemic activities like critical thinking and the active pursuit of reliable knowledge. Gamified social media platforms like Twitter use scores (numbers of likes, retweets, or followers) as a heuristic for human communication [7]. While ordinary human communication is subtle, rich, and complex, gamified communication impoverishes the human capacity to communicate by displacing the capacities for empathy, patience, and reciprocity, which are necessary for meaningful human relationships [8]. ChatGPT replaces the human activity of generating original text, which arguably degrades the capacity for creativity [9]. Additionally, researchers warn that in education, using ChatGPT could supplant rather than supplement human cognitive functions like critical thinking or memory retention [10].
On a perfectionist view of human well-being, the deskilling of human capacities due to the proliferation of AI is bad, as it impoverishes the capacities necessary for engaging in meaningful human activity. Yet there is also an important moral dimension. AI tools like gamified social media and ChatGPT arguably contribute to the creation of capacity-hostile environments: hostile in the sense that the environment actively limits humans’ ability to develop and exercise their capacities. I draw on John Rawls’ idea of ‘society’s basic structure’ [11] to argue that dismantling hostile environments is a matter of justice. The basic structure is the interconnected system of rules and practices that embody the political constitution, legal procedures, the institution of property, markets, and the institution of the family [12]. While the basic structure affects citizens’ lives and opportunities in fundamental ways, it does so not only without their consent but also without their being able to have much influence over it [13]. Combining the idea of the basic structure with the perfectionist commitment to ensuring that humans flourish, we arrive at the following: a perfectionist basic structure should ensure that all persons have an opportunity to develop and exercise their human capacities [14]. AI tools that create capacity-hostile environments undermine the justice of the basic structure. As such, we have reason to regulate against AI tools that contribute to capacity-hostile environments and thereby deskill human capacities.
[1] Shannon Vallor, “Moral Deskilling and Upskilling in a New Machine Age: Reflections on the Ambiguous Future of Character,” Philosophy & Technology 28, no. 1 (2015): 107–24, https://doi.org/10.1007/s13347-014-0156-9; Pak-Hang Wong, “Rituals and Machines: A Confucian Response to Technology-Driven Moral Deskilling,” Philosophies 4, no. 4 (2019): 59, https://doi.org/10.3390/philosophies4040059.
[2] Vallor, “Moral Deskilling and Upskilling in a New Machine Age: Reflections on the Ambiguous Future of Character,” 121.
[3] Vallor, 114–21.
[4] Thomas Hurka, Perfectionism (New York: Oxford University Press, 1993); Richard Kraut, What Is Good and Why: The Ethics of Well-Being (Cambridge, Mass.: Harvard University Press, 2007); Gwen Bradford, “Problems for Perfectionism,” Utilitas 29, no. 3 (2017): 344–64.
[5] Bradford, “Problems for Perfectionism”; Gwen Bradford, “Perfectionist Bads,” The Philosophical Quarterly 71, no. 3 (2021): 586–604, https://doi.org/10.1093/pq/pqaa055.
[6] C. Thi Nguyen, “Hostile Epistemology,” Social Philosophy Today 39 (2023): 9–32.
[7] C. Thi Nguyen, “How Twitter Gamifies Communication,” in Applied Epistemology, ed. Jennifer Lackey (New York: Oxford University Press, 2021), 410–36.
[8] Shannon Vallor, “Flourishing on Facebook: Virtue Friendship & New Social Media,” Ethics and Information Technology 14, no. 3 (2012): 185–99, https://doi.org/10.1007/s10676-010-9262-2.
[9] Ted Chiang, “ChatGPT Is a Blurry JPEG of the Web,” The New Yorker, February 9, 2023, https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-....
[10] Long Bai, Xiangfei Liu, and Jiacan Su, “ChatGPT: The Cognitive Effects on Learning and Memory,” Brain‐X 1, no. 3 (2023): e30, https://doi.org/10.1002/brx2.30.
[11] John Rawls, Political Liberalism (New York: Columbia University Press, 1996), 269.
[12] Samuel Freeman, “Introduction,” in The Cambridge Companion to Rawls, ed. Samuel Freeman (Cambridge: Cambridge University Press, 2003), 1–62.
[13] Thomas M. Scanlon, “Rawls on Justification,” in The Cambridge Companion to Rawls, ed. Samuel Freeman (Cambridge: Cambridge University Press, 2003), 139–67.
[14] Avigail Ferdman, “A Perfectionist Basic Structure,” Philosophy & Social Criticism 45, no. 7 (2019): 862–82, https://doi.org/10.1177/0191453718820891.
John Dorsch
Explainable AI in Automated Decision Support Systems: Reasons, Counterfactuals, and Model Confidence
N/A