PhAI 2025 – Programme
Day 1 (Thursday, 23.10.2025)
09:30–10:00 | Registration & Coffee
10:00–11:30 | Keynote A in Room HG12A-00
Markus Kneer, University of Graz
11:30–12:00 | Coffee Break
12:00–13:30 | Sessions A

| HG5A-33 | HG10A-33 OR HG8A-33 |
|---|---|
| Pierre Beckmann | Andre Curtis-Trudel and Preston Lennon |
| Information Recall in Deep Learning: Beyond the Feature Combination Paradigm | Evaluating representationalist folk mentalism about LLMs |
| James McIntyre | Carson Johnston |
| The Right to Restrict AI Training | Navigating the Impact of Computational Science on the Concept of Epistemic Agency |
| Xuyang Zhang | Leora Sung and Avigail Ferdman |
| Against the Biological Objection to Strong AI | Self-Knowledge and AI Companions |
13:30–15:00 | Lunch Break
15:00–16:30 | Sessions B

| HG5A-33 | HG10A-33 OR HG8A-33 |
|---|---|
| Slater, Townsen Hicks and Humphries | Ayoob Shahmoradi |
| ChatGPT is Still Bullshit | Does Thinking Require Sensory Grounding? |
| Merel Semeijn | Oshri Bar-Gil |
| Botspeech? Bullshit! | Toward a Relational Ethics Framework for AI: Integrating Postphenomenological Analysis with Care-Centered Design Principles |
| Eliot Du Sordet | Tuhin Chattopadhyay |
| Causal Representation Problems in LLMs’ World Models | Synthetica: Toward a Unified Ontology of Artificial Consciousness |
16:30–17:30 | Coffee Break & Posters I
17:30–18:30 | Keynote B in Room HG12A-00
TBD
18:30–20:00 | Reception / Borrel: Drinks & Fingerfood (Lobby)
Day 2 (Friday, 24.10.2025)
09:00–10:30 | Keynote C
Emily Sullivan, University of Edinburgh
10:30–12:00 | Sessions C

| HG5A-33 | HG8A-33 |
|---|---|
| Samuela Marchiori | Pelin Kasar |
| Low-code/no-code AI platforms and the ethics of citizen developers | There Is a Problem, But Not a Responsibility Gap |
| Tobias Henschen | Jan Michel |
| Algorithmic decision-making and equality of opportunity | Scientific Discovery and the Little Helper LLM: Proxy, Partner, or Pioneer? |
| Susana Reis | Maud van Lier |
| Beyond Inductive Risk: Toward a Broader Epistemic Framework for Value-Laden Decisions in Machine Learning Models | The Role of the Environment in Agency Debates |
12:00–13:30 | Lunch
13:30–15:00 | Sessions D

| HG5A-33 | HG8A-33 |
|---|---|
| Linus Ta Lun Huang and Ting-An Lin | Iwan Williams, Ninell Oldenburg, et al. |
| AI, Normality, and Oppressive Things | Mechanistic Interpretability Needs Philosophy |
| Davide Beraldo | Marius Bartmann and Bert Heinrichs |
| (How) do machines make sense? Ethnomethods, technomethods and mechnomethods | Large Language Models As Semantic Free Riders |
| Jakob Ohlhorst | Fabio Tollon and Guido Löhr |
| Folie à 1 – Artificially induced delusion and trust in LLMs | Assertions from the Margins: On AI Answerability |
15:00–16:00 | Break
16:00–17:30 | Sessions E

| HG5A-33 | HG8A-33 |
|---|---|
| Michael Lissack and Brenden Meagher | Matteo Baggio |
| Distributing Agency: Rethinking Responsibility in AI Development and Deployment | Counter-Closure Principles, AI, and the Challenge of Conveying Understanding |
| Sonja Spoerl, Andrew Rebera, Fabio Tollon and Lode Lauwaert | Nina Poth and Annika Schuster |
| “Virtue Theatre”: Artificial Virtues and Hermeneutic Harm | Representation: Mental, Scientific – and Artificial? |
| Kris Goffin | |
| Fear Bots: Should we be afraid of proto-fearful AI? | |
17:30–19:00 | Keynote D in Room HG14-00
TBD
Accepted Posters
| Authors | Title |
|---|---|
Uchizi Shaba | Mind Uploading |
Hans Van Eyghen | AI systems are more transparent than humans. A modest plea for relying more on AI. |
Michael Lissack and Brenden Meagher | LLMs as Epistemic Tools: Exformation and the Architecture of Machine Explanation |
Roman Krzanowski | Intentionality and the Limits of LLMs |
Wolfhart Totschnig | When will an AI start asking for reasons? |
Alexandru Mateescu | From Artificial Intelligence to Artificial Influence: Philosophical Reflections on Personalized Persuasion and Educating for Autonomy |
Brian Ball, Alex Cline, David Freeborn, Alice Helliwell and Kevin Loi-Heng | Concepts and Classification Algorithms: A Case Study Involving a Large Language Model |
Peter Tsu | The Ethical Frame Problem and Moral Perception Situated in a Form of Life |
Rayane El Masri and Aaron Snoswell | Towards Attuned AI: Integrating Care Ethics in Large Language Model Development and Alignment |
Claas Beger | Towards AI Collaborators: Exploring Goal, Value and Role-Based Alignment |
Markus Pantsar | Artificial and human mathematical reasoning |
Enrique Aramendia Muneta | AI and consciousness: How long is the shadow of epiphenomenalism? |
Adrian Cussins | Life and intelligence are different responses to the same cognitive challenge |
Huseyin Kuyumcuoglu | Contractualist Solution for the Fairness Impossibility Theorem |
Sabato Danzilli | The writing of the query as a hermeneutical act |
Elina Nerantzi | Between persons and things: AI agents in Criminal Law |
Jonathan Pengelly | Moral Cartography and Machine Ethics |
Cecilia Vergani | The Social and Political dimension of Work: Technological Unemployment as a Threat to human cooperation, social integration and solidarity. |
Konstantinos Voukydis | Phenomenal Consciousness in the Age of Large Language Models |
Oliver Hoffmann | Framing Subjects and Objects |
Frieder Bögner | Attention economy, exploitation and recognition-based harms |
Sandrine R. Schiller and Filippos Stamatiou | What Is Human in the Age of Artificial General Intelligence?: Revisiting Aristotle’s Account of Natural Slavery |
Qiantong Wu | The Philosophical Zombie and The Possibility of AI Consciousness in Large Language Models |
Rokas Vaičiulis | The Externalist Implications of Machine Learning Epistemology: Empirical Knowledge and Its Social Dimension in the Accounts of C. Buckner and M. Pasquinelli |
Hannah Louise Mulvihill, Taís Fernanda Blauth, Oskar Josef Gstrein and Andrej Zwitter | A systematic review of values integral to ethical design frameworks for the governance of artificial intelligence |
Daniel Hromada | Prelude to Hermeneutics of Latent Spaces |
Andrew Richmond | How not to do theory transfer: Representation in philosophy and explainable AI |