Program

Created on: 02.09.2011 | Last updated on: 02.10.2011

All times given include discussion time. Speakers are expected to leave about 1/3 of time for discussion.

Monday, 03.10.2011

8:45 Bus transfer to Anatolia College (two buses)
Bus 1: "Plateia Eleftherias", Port entrance, corner Ionos Dragoumi/Leoforos Nikis (seafront)
Bus 2: "Levkos Pyrgos" (White Tower) on Leoforos Nikis (seafront)

9:45 Welcome and introduction (Vincent C. Müller)

10:00-11:00 Keynote Hubert Dreyfus "Are we there yet? And is it what we expected?"
[Location: ACT New Building Amphitheater]

11:00-11:30 Coffee

11:30-13:30 Parallel sessions (4 x 3)

Session A (New Building, Conference Room). Chair: Franchi
Session B (Bissell Library, Teleconferencing Room). Chair: Schmidt
Session C (Bissell Library, 2nd floor). Chair: Morse
11:30-12:00
Session A: Marcin Miłkowski, "Limits of Computational Explanation of Cognition"
Session B: Tarek Richard Besold, "Turing Revisited"
Session C: Claudius Gros, "Emotional Control: Conditio Sine Qua Non for Advanced Artificial Intelligences?"
ABSTRACT (Miłkowski): In this talk, I want to focus on cognitive phenomena that cannot be explained computationally. From the very beginning, research on Artificial Intelligence had two goals: to create artificial cognitive systems and to explain the behavior of natural cognitive systems in the same way the artificial systems are explained. The second goal was based on the assumption that artificial systems are good models of natural ones, which means that they share the relevant causal organization underlying their behavior (for an early expression of this view, see Craik 1943). Yet early AI systems were usually created without much theoretical analysis beforehand, and the enthusiasm for them could not be easily justified, especially in areas where human cognitive behavior seemed much more flexible than rule-driven processing of symbols. The computational approach to cognition was criticized for exactly this reason (Dreyfus 1972).

All similar criticisms notwithstanding, computational explanation of cognitive systems remains the core of cognitive science, even in the enactive research program, and many dynamical accounts of cognition share the most important assumptions of computationalism. Computational models abound in neuroscience.

In the first part of my talk, I will briefly sketch a mechanistic account of computational explanation that spans multiple levels of organization of cognitive systems (Piccinini 2007, Miłkowski forthcoming). I argue that this account is both descriptively and normatively adequate for most current research in cognitive science. In the second part of the talk, I will focus on what is impossible to explain about cognitive systems in this way.

First of all, mechanistic explanation (Machamer, Darden & Craver 2000) of a phenomenon does not explain the bottom level of a mechanism, i.e. its parts and their organization. There might be an explanation of why they are organized this way and not another, and of why they are what they are, but this is not part of the same explanation. Most importantly in this context, this means that one cannot explain the makeup of the parts that constitute a computational system. If they play any role in cognitive phenomena, one needs to explain their contribution, and that contribution will usually (with a possible exception for multiply nested virtual machines) be non-computational. An obvious example is that the speed of a machine depends not only on its algorithms but also on its hardware capacities. The relevant hardware capacities of a transistor-based machine cannot be explained computationally. Yet, obviously, processing speed does influence cognitive performance. In other words, not all time-related aspects are explainable computationally (even though reaction time is one of the main kinds of experimental data used to test hypotheses about computations).

Similarly, the top level of mechanistic explanation, the contextual level, may itself not be computational. At this level one may explain system goals and autonomy (e.g. in terms of feedback, Bigelow, Rosenblueth & Wiener 1943), but such an explanation will require reference to an environment that is not a part of the mechanism. If the mechanism and its environment are not part of another computational mechanism (that is, if the mechanism is not nested in another computational system), such an explanation, even if couched in purely causal terms, will be non-computational. This is also one of the reasons why meaning externalism was opposed for so long, though it actually complements computationalism rather than denies it (McClamrock 1995). In other words, representation, if it is explained essentially not only by the way it is encoded (encodingism, criticized by Bickhard & Terveen 1995) but also by interaction with the non-computational environment, is not a computational phenomenon either.

I will conclude my talk by defending explanatory pluralism in cognitive science, which includes computational explanation alongside other empirically and theoretically sound explanatory strategies. Drawing on several examples from classical and recent research, from cryptarithmetic (Newell & Simon 1972) and past-tense learning (Rumelhart & McClelland 1986) to biorobotics (Webb & Scutt 2000) and dynamical neural modeling (Conklin & Eliasmith 2005), I will show to what extent they already presuppose explanatory pluralism while at the same time using computation to explain cognition.
ABSTRACT (Besold): After a short assessment of the history, current status and overall role of the Turing test within AI, we propose a computational cognitive modeling-inspired decomposition of the Turing test as the classical "strong AI benchmark" into at least four intermediary testing scenarios: a test of natural language understanding, an evaluation of performance in emulating human-style rationality, an assessment of creativity-related capacities, and a measure of an AI system's performance in natural language production.
ABSTRACT (Gros): The vast majority of research in artificial intelligence is devoted to studying algorithms, paradigms and philosophical implications of cognitive information processing, such as conscious reasoning and problem solving. Rarely considered is the motivational problem: a highly developed AI needs to set and select its own goals and tasks autonomously.

The most developed cognitive beings on earth, humans, are infused with emotions; they play a very central part in our lives. Is this a coincidence, a caprice of nature, perhaps a leftover of our genetic heritage, or a necessary aspect of any advanced intelligence?

Any living intelligence has to take decisions while managing two limited resources: time and the computational power of its supporting hard- or wetware. Here we argue that emotional control is deeply entwined with both short- and long-term decision making in general-purpose cognitive systems and makes it possible to compute approximate solutions to the motivational problem in real time.

Emotional states in the brain are intrinsically related to the neuromodulatory control systems, which may be triggered through cognitive processes, but which act diffusively, regulating neural response properties in extended areas. The neuromodulatory system also controls the release of internal rewards, and hence learning rates and decision making.

In fact, emotional states generically induce problem-solving strategies. The cognitive system either tries to keep its present emotional state, if it is associated with positive internal rewards, or looks for ways to remove the causes of its current emotional state, if it is associated with negative internal rewards. Emotional control hence represents a way, realized in real-world intelligences, to solve the motivational problem.

Any advanced intelligence needs to be a twofold universal learning system. On the one hand, it must be able to acquire any kind of information in a wide range of possible environments; on the other hand, it must determine autonomously what to learn, i.e. solve the time-allocation problem. The fact that both facets of learning are regulated through diffusive emotional control in existing advanced intelligences suggests that emotional control may be a conditio sine qua non for any universal intelligence, real-world or artificial.

A much-discussed alternative to emotional control is straightforward utility maximization. This paradigm is highly successful when applied to limited and specialized tasks, such as playing chess, and is as such important for any advanced intelligence. Whether it is possible to formulate an overall utility function for a universal intelligence with an evolving time horizon, and to compute its gradients in real time, remains however doubtful. It is hence likely that advanced artificial intelligences will be endowed with 'true' synthetic emotions; the prospect of a hyperintelligent robot waiting emotionless in its corner until its human boss calls it to duty seems implausible.
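For reference, the utility-maximization paradigm referred to above is standardly expressed as the choice of a policy maximizing an expected discounted reward sum. The notation below is generic textbook notation (policy \pi, rewards r_t, discount \gamma, horizon T), not taken from the talk:

\[
U(\pi) \;=\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{T} \gamma^{t}\, r_{t}\right],
\qquad
\pi^{*} \;=\; \arg\max_{\pi} U(\pi).
\]

The worry expressed in the abstract is that, for a universal intelligence, neither a fixed horizon T nor a tractably computable gradient of U is available in real time.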
12:00-12:30
Session A: Gordana Dodig Crnkovic, "Info-computational Character of Morphological Computing"
Session B: Viola Schiaffonati and Mario Verdicchio, "The Influence of Engineering Theory and Practice on Philosophy of AI"
Session C: Joscha Bach, "Enactivism Considered Harmful"
ABSTRACT (Dodig Crnkovic): In recent years, morphological computing has emerged as a new idea in robotics (Pfeifer 2011; Pfeifer and Iida 2005; Pfeifer and Gomez 2009; Paul 2004). From the beginning, following the Cartesian tradition, robotics treated the body/machine and its control separately. Gradually, however, it became evident that embodiment itself is essential for cognition, intelligence and the generation of behavior. In the most profound sense, embodiment is vital because cognition results from the interaction of brain, body, and environment (Pfeifer 2011).

From an evolutionary perspective it is central that the environment provides the physical source of the biological body as well as the energy and matter for its metabolism. Based on the DNA code, the body is created through morphogenesis, which governs the short-time-scale formation of an organism. On a long time scale, morphological computing governs the evolution of species. The nervous system and brain evolve gradually through interactions (computational processes) of a living agent with its environment, as a result of information self-structuring (Dodig Crnkovic 2008). The environment provides a variety of inputs, and at the same time imposes constraints which limit the space of possibilities, driving the computation along specific trajectories. This relationship is called structural coupling by Maturana & Varela (1980) and described by Quick and Dautenhahn (1999) as "non-destructive perturbations between a system and its environment, each having an effect on the dynamical trajectory of the other, and this in turn effecting the generation of and responses to subsequent perturbations." Clark (1997, p. 163) talks about "the presence of continuous mutually modulatory influences linking brain, body and world."

In morphological computing, modeling of the agent's behavior (such as locomotion and sensory-motor coordination) proceeds by abstracting the principles of information self-structuring and sensory-motor coordination (Matsushita et al. 2005; Lungarella et al. 2005; Lungarella and Sporns 2005; Pfeifer, Lungarella and Iida 2007). Brain control is decentralized, based on sensory-motor coordination through interaction with the environment. Through this embodied interaction, in particular through sensory-motor coordination, information structure is induced in the sensory data, thus facilitating perception, learning and categorization. The same principles of morphological (physical) computing and data self-organization apply to biology and robotics.

It is interesting to note that in 1952 Alan Turing wrote a paper proposing a chemical model as the basis of the development of biological patterns such as the spots and stripes on animal skin (Turing 1952). Turing did not originally claim that a physical system producing patterns actually performs computation through morphogenesis. Nevertheless, from the perspective of info-computationalism (Dodig Crnkovic 2009) we can argue that morphogenesis is a process of morphological computing. The physical process, though not "computational" in the traditional sense, constitutes natural (unconventional), physical, morphological computation. An essential element in this process is the interplay between informational structure and the computational process: information self-structuring and information integration, both synchronic and diachronic, going on at different time and space scales.

Morphology is the central idea in understanding the connection between computation and information. Materials, too, represent morphology, just at a more basic level of organization: the arrangements of molecular and atomic structures.

Info-computational naturalism (Dodig Crnkovic 2009) describes nature as informational structure: a succession of levels of organization of information. Morphological computing on that informational structure leads to new informational structures via processes of self-organization of information. Evolution itself is a process of morphological computation on a long-term scale. Within the info-computational framework it will be instructive to study in detail the processes of self-organization of information in agents (as well as in populations of agents) able to re-structure themselves through interactions with the environment, as a result of morphological (morphogenetic) computation.
ABSTRACT (Schiaffonati & Verdicchio): We aim to investigate the relationship of the philosophy of AI to the field of philosophy and engineering.

We claim that philosophy of AI can offer some insight toward the foundation of a philosophy of engineering, and, in turn, can receive some interesting new ideas about its own nature and its place with respect to philosophy of science.
ABSTRACT (Bach): Enactivism, as a paradigmatic position, has recently gained traction and resonance within cognitive science in general, and Artificial Intelligence in particular. While enactivism is associated with the concepts of the extended mind and embodied cognition, its radical interpretation goes considerably further. I will argue that radical enactivism, when pursued rigorously, has the potential to afflict cognitive science in a way similar to what behaviorism did to psychology.
12:30-13:00
Session A: Selmer Bringsjord and Naveen Sundar Govindarajulu, "Toward a Modern Geography of Minds, Machines, and Math"
Session B: Anders Sandberg, "Feasibility of Whole Brain Emulation"
Session C: Gagan Deep Kaur, "Being-in-the-AmI: Pervasive Computing from a Phenomenological Perspective"
ABSTRACT (Bringsjord & Govindarajulu): This abstract provides a brutally quick overview of the project entitled "___" at the ___ Lab at ___, funded by ___. This project is motivated by a series of questions; here we focus on only the following two:

Q1: What are the apparent limits of computational logic-based formal techniques in advancing explicit, declarative human scientific knowledge in various domains, and how can these limits be pushed/tested?

Q2: What have the fundamental, persistent difficulties of AI taught us about the nature of mind and intelligence, and how can these difficulties be overcome by logic-based AI (if indeed they can be)?

In a nutshell, our answer to Q1 is that in the realm of AI and computational linguistics, the apparent limit of our knowledge of human language (amply reflected in the fact that, contra Turing and his well-known prediction that by 2000 his test would be passed, we are unable to engineer a computing machine able to converse even at the level of a bright toddler) is fundamentally due to the fact that AI and cognate fields have not yet managed to devise a comprehensive logical system that can do justice to the fact that natural language makes use, sometimes in one and the same sentence, of multiple intensional operators. For example, English allows us to say/write and understand such recalcitrant sentences as: "Jones intends to convince Smith to believe that Jones believes that were the cat, lying in the foyer now, to be let out, it would settle, dozing, on the mat outside." Were such a system in place, and implemented in working software, the human knowledge of human language would be advanced beyond the current limits on that knowledge.
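Purely as an illustration of what "multiple intensional operators in one and the same sentence" involves (the operators and predicate names below are generic placeholders, not the project's actual formalism), the quoted sentence can be roughly schematized as

\[
\mathbf{I}_{\mathit{jones}}\,
\mathbf{B}_{\mathit{smith}}\,
\mathbf{B}_{\mathit{jones}}\,
\bigl(\mathrm{LetOut}(\mathit{cat}) \;\Box\!\rightarrow\; \mathrm{SettleOn}(\mathit{cat},\mathit{mat})\bigr),
\]

where \mathbf{I} is an intention operator, \mathbf{B} a belief operator, and \Box\!\rightarrow a subjunctive conditional; the act of convincing, the temporal indexicals and the self-referential reading are elided here, which is precisely the kind of detail a comprehensive multi-operator logic would have to capture.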

Our equally brief answer to Q2: The difficulties of AI have taught us that beyond the challenge of rendering human language in computational terms, there is this lesson as well: whereas the human mind (at least in the formal sciences) can routinely deal with concepts that are seemingly infinite in nature (e.g., transfinite numbers), standard computing machines are paralyzed by such concepts and the associated processes. For instance, while automated theorem proving has made impressive progress, that progress has been completely independent of proof techniques that, for example, make use of infinite models and infinitary inference rules (such as the $\omega$-rule).

The project consists of five research thrusts that will flesh out our two answers; here we mention only three of these thrusts. The thrusts, given present space limitations reduced to little more than suggestive labels, are these:

T1. Multi-operator Intensional Logics. Here we strive to create logical systems that are sufficiently expressive to capture information in natural-language sentences that simultaneously employ operators for knowledge, belief, perception, "tricky" conditionals (e.g., subjunctive conditionals), and self-consciousness.

T2. Toward Automation of "Infinitary" Proofs. Here we initially confine our attention to propositions that are independent of PA (Peano Arithmetic), and hence are examples of Gödelian incompleteness. For example, how might a computing machine prove Goodstein's Theorem? We are seeking to answer this question.
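To make the running example concrete: a Goodstein sequence is generated by writing a number in hereditary base-n notation, replacing every occurrence of the base n by n+1, and subtracting 1; the theorem says every such sequence eventually reaches 0, yet PA cannot prove this. A minimal Python sketch of the sequence itself (illustrative only, not code from the project):

# Illustrative sketch of Goodstein sequences; not part of the project described above.
def rebase(n, b, c):
    """Value of n's hereditary base-b representation with every b replaced by c."""
    if n == 0:
        return 0
    total, exponent = 0, 0
    while n > 0:
        n, digit = divmod(n, b)
        if digit:
            # Exponents are themselves rewritten hereditarily.
            total += digit * c ** rebase(exponent, b, c)
        exponent += 1
    return total

def goodstein(m, steps):
    """Return the first `steps` terms of the Goodstein sequence starting at m."""
    seq, g, base = [m], m, 2
    while len(seq) < steps and g > 0:
        g = rebase(g, base, base + 1) - 1  # bump the base, then subtract 1
        base += 1
        seq.append(g)
    return seq

print(goodstein(3, 10))  # [3, 3, 3, 2, 1, 0]: reaches 0 quickly
print(goodstein(4, 6))   # [4, 26, 41, 60, 83, 109]: grows enormously before (eventually) returning to 0

The contrast between the triviality of this computation and the unprovability of its termination in PA is what makes the theorem a natural target for T2.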

T3. Palette-Infinity Machines. Here we are specifying a cognitively plausible version of Kolmogorov-Uspenskii machines that have super-Turing capability. (See [1] for Kolmogorov and Uspenskii's specification of their machines and [2] for a recent readable introduction by Smith.)

We will describe recent, dramatic extensions to our cognitive event calculus (CEC), which is used in T1 above. The CEC is a logical system that has been used to solve many complex problems, such as the false-belief task and arbitrarily large versions of the wise-man puzzle; it is currently being used to simulate the task of mirror self-recognition, and also to model certain economic phenomena. (We leave citations aside to preserve anonymity.) We will present at the conference three broad extensions to the CEC that enable it to handle: time indices and nominals; different types of speech acts; and communication that includes multiple modal operators, including those needed to model and computationally simulate so-called de se beliefs. As we will show, this reach for the CEC marks progress on T1, and hence progress on Q1.

At the conference itself, our presentation will include embedded video demonstrations of progress made on T1, T2, and T3.
ABSTRACT (Sandberg): Whole brain emulation (WBE) is the possible future one-to-one modelling of the function of the entire (human) brain. This would achieve software-based intelligence by copying biological intelligence (without necessarily understanding it).

WBE is interesting for several reasons.

· It is the logical endpoint of computational neuroscience’s attempts to accurately model neurons and brain systems, and the emergent dynamics that occur in such models.

· There are drivers for WBE such as neuroscience research, neuromorphic engineering and brain-computer interfaces.

· Attempts at brain emulation would itself be a test of many ideas in the philosophy of mind and philosophy of identity.

· The economic impact of copyable brains could be immense, and could have profound societal consequences (Hanson, 1994, 2008b). Even low probability events of such magnitude merit investigation.

WBE represents a formidable engineering and research problem, yet one that appears to have a well-defined goal and could, it would seem, be achieved by extrapolations of current technology. This is unlike many other approaches to artificial intelligence where we do not have any clear metric of how far we are from success.

This paper will explore the feasibility of WBE, investigating what preconditions – philosophical, scientific and technological - are necessary for various degrees of success.

Philosophical Issues

A first fundamental issue is whether emulations of chaotic systems are meaningful. Given that the brain almost certainly contains chaotic dynamics, the state of an emulation will quickly diverge from the state of the original. This is related to the distinction between an emulation (similar causal structure) and a simulation (similar observable output).

Physicalism: WBE assumes that everything that matters in brains supervenes on the physical. WBE also makes assumptions about the philosophy of mind similar to those of strong AI.

A key assumption, characteristic of the WBE approach to AI, is non-organicism: total understanding of the brain is not needed, just understanding of its component parts and their functional interactions. In normal science, top-level understanding is seen as the goal, with detailed understanding merely a step towards it.

Can meaningful degrees of success be defined and observed? An attempted emulation might produce brain states that are not functionally unified into meaningful behavior. It might also produce species-typical behavior rather than behavior linked to the properties (e.g. memories) of the individual brain on which the emulation was based. An emulation that does exhibit these individual traits might still fail at being a mind emulation (it lacks mental properties), a person emulation (it lacks necessary aspects of personal continuity) or a social role-fit emulation (it cannot fit into the social identity of the original brain).

Scientific Issues

This group of issues deals with the physical properties of the brain and the possibility of humans inferring enough information about them to achieve WBE. It also includes the methodological question of how a WBE research program could be implemented so as to approach a successful emulation over time.

A key issue is what level of detail of understanding of the brain is needed. This is closely tied to size scales: a higher level of detail typically requires gaining neuroscientific information at smaller scales, requiring new measurement modalities. High-resolution scanning also produces more information, requiring more storage and processing. The fundamental approach of WBE is that it trades high-level understanding for brute-force requirements.

Scale separation is a key challenge for the WBE project. Does there exist a level where interactions on shorter length and time scales average out, producing dynamics largely uncoupled from the dynamics on smaller scales, or is each scale strongly linked to larger and smaller scales? If no such scale separation exists, then the feasibility of WBE is much in doubt: while a perfect copy of a brain would achieve WBE, no method with a finite cut-off would achieve emulation. The existence of scale separation is both a fundamental requirement of WBE and a practical problem (for finding the optimal resolution of the model), as well as an intriguing scientific problem.
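One conventional way to phrase the scale-separation question (the notation here is generic, not the author's): can the macroscopic state X be given closed dynamics in which the faster, smaller-scale degrees of freedom enter only as a small correction,

\[
\frac{dX}{dt} \;=\; F(X) \;+\; O(\varepsilon),
\qquad
\varepsilon \sim \tau_{\text{fast}}/\tau_{\text{slow}} \ll 1,
\]

so that a model with a finite cut-off at the level of X loses only the O(\varepsilon) terms?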

Computability: WBE assumes that brain activity is, by and large, Turing-computable. A related challenge is component tractability: can the simplest components being simulated be understood and measured? For example, if the quantum-mind proposals of Hameroff et al. were true, the relevant components might be quantum states that cannot be measured even in principle, even if their dynamics were known.

Brain-centeredness: we do not need to simulate the entire body beyond an adequate level. Still, bodily states are clearly necessary for perception and action, since the brain's interaction with the environment is mediated by a body transducing neural signals. Bodily states also influence brain states directly and can contribute content. Hence some aspects of the body need to be part of the emulation framework.

Analysing the potential of the WBE project also involves estimating the number and complexity of the biological modalities that need to be modeled. Some questions, such as whether dynamical state, the spinal cord, volume transmission or glial cells need to be included, can already be answered with some precision. Known unknowns, such as the number of neuron types, neurotransmitters or relevant metabolites, can be bounded. The interesting challenge is assessing unknown unknowns, such as whether there exist entirely new forms of interaction in the brain.

Technological issues

This group of issues deals with the technological feasibility of scanning brains and emulating them.

The challenge of simulation tractability is whether simulation at the level set by scale separation can be done on a realizable computer. This might be fundamental (if the brain components are doing uncomputable operations) or practical (there will not be enough computing power available in the future to achieve meaningful WBE). A related issue is whether scanning methods for the necessary level of detail are realisable (or ethically acceptable).

Given current neuroscientific and technological knowledge, there do not seem to be any fundamental obstacles. However, extrapolations of technology and neuroscience are untrustworthy, especially given the possibility of foundational objections. The scale-separation issue might provide a fruitful empirical way of testing the feasibility of WBE in the near future, with relevant implications for philosophy of mind and neuroscience.
ABSTRACT (Kaur): This paper seeks to explore some ontological issues raised by pervasive computing, which is, if not a full, then a partial reality at present. It is a modest attempt at exploring the status of ubiquitous artifacts, of the technology, and of the users who are embedded in this ambient intelligence. For Mark Weiser (1991), the founder of pervasive computing, the sign of a "profound technology" is its "disappearance": its physical invisibility in the form of its embeddedness in the objects of everyday use. The Weiserian profundity of technology in its physical disappearance can be contrasted with the Heideggerian insistence on withdrawal, whereby equipment gains its efficiency by becoming transparent in the everyday activity of Dasein. In the latter, the equipment does not disappear by virtue of its materiality, but rather by its embedding in the skillful activity of Dasein. Even in breakdown it does not simply disappear from the gaze of the user; rather, in breakdown the tool discloses itself as an object, a presence-at-hand, up for the theoretical reflection of Dasein. Withdrawal is from the cognitive awareness of the user, whereas disappearance is a vanishing into the background whether or not the user is aware. Whereas Weiser's insistence is on making the tool physically vanish by embedding it in objects, things, buildings, furniture and the like, Heidegger's effort is to disclose how the tool withdraws from the gaze of the user when it is used. Section 1 elucidates this difference.

The technology which aims for disappearance, rather than transparency, may be questioned in this pursuit, since it raises important ethical issues in its wake. With invisible technology, the disappeared artifact, which puts no demands on the user, tends to make the user passive in her circumspective dealings with it. In such a scenario the 'user' might end up being 'used' by the technology itself, or rather abused or even misused, without her awareness. Since the ethical concerns regarding the user's privacy have already been raised (Michelfelder, 2010), this paper is confined to exploring the status of this technology, in Section 2. How might pervasive computing be thought about theoretically? As a 'ready-to-hand' technology, or as the invisible background of all existential activities of human beings? What might it mean to use this vanishing technology? At present, users enjoy an alterity relationship with techno-artifacts: not in the sense of Don Ihde, wherein the artifact, like a spinning top which, once set spinning, "takes on a life of its own", but in the sense of an other that can be encountered as not-me. Distinct materiality is a precondition for distinguishing what is part of me from what is not-me. Once artifacts recede into the background, they cease to approach us as themselves. This distance between user and artifact is, moreover, a blessing, as it enables the user to take a critical stand on the artifact, to avoid it or let it go if she so desires, which gives her an edge over the technology. With the vanishing of this distance in ambient intelligence, the user is anticipated to become merged into the inescapable matrix of the technological surround, leading to a loss of her autonomy in dealing with the technology. On the one hand, this loss of the artifact's alterity leads to the loss of the artifact's disclosing itself in breakdown (Araya, 1995); on the other, more fundamentally, it alters the ontological status of the environment too, which becomes enmeshed with the technology. It alters the way the environment challenges our ways of acting, by making it cease to surprise us. The environment, from being the existential background of human beings, becomes a mute reservoir of means to further the user's ends: a reservoir which keeps providing its services without being prompted and which keeps feeding itself with our informational trails without asking. As an existential setting, the environment exists as a demanding backdrop, one that not only makes the be-ing of Dasein possible, but also challenges her wits and guts to understand and tamper with it. As a reservoir of means-to-ends quietly feeding us our needs, the environment ceases to be a challenging background. It ceases to surprise us and therefore to pose the need to be understood. The user is thus anticipated to be distanced from her own existential background.
13:00-13:30
Session A: Darren Abramson, "Untangling Turing's Response to Lady Lovelace: The Turing/Ashby Dispute"
Session B: Tijn van der Zant, Matthijs Kouw and Lambert Schomaker, "Generative Artificial Intelligence"
Session C: Roman Yampolskiy, "What to Do with the Singularity Paradox?"
ABSTRACT (Abramson): In this paper, I identify a debate between Turing and a contemporary of his, and show two responses to Lady Lovelace's objection. I show that W. R. Ashby held a mistaken view concerning why general-purpose computers could not think. Furthermore, I show that, contrary to recent commentators' analysis of Ashby's notebooks, Ashby persisted in his mistaken view, which Turing responded to without mention of Ashby's name in "Computing Machinery and Intelligence" (1950). New clarification of this disagreement, together with my argument showing Turing's vindication, constitutes an argument against the contemporary dynamical systems hypothesis in artificial intelligence.
ABSTRACT (Van der Zant, Kouw & Schomaker): Traditionally, science is about analysis. Science analyzes in order to understand, that is, so that humans understand. Subsequently these humans try to formalize the analysis by creating models and writing textbooks on the subject, and a new breed of scientists can try to understand the mysteries of the previous generation [Kuhn, 1962]. This is, in short, the analytic phase of science that has dominated scientific thinking since the advent of science.

The formation of applicable new structures in existing matter and energy is one of the most important outcomes of science. Depending on the point of view, one could say that scientists are not creating anything new, but are discovering new arrangements of matter and energy that can be used for purposes that seem useful to humans. Deleuze and De Landa call this the tracking of the 'machinic phylum', which is a lineage based on singularities, also known as bifurcations. A singularity is a place in phase space where there is uncertainty about the path to be followed, where a bifurcation might happen. Depending on both internal and external factors, the path is chosen. A small change in a bifurcation parameter can cause a sudden change in the behavior of the system...
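As a concrete illustration of that last point, consider the logistic map, a textbook example (not one used by the authors): nudging its parameter r across the bifurcation value 3 changes the long-run behavior qualitatively. A minimal Python sketch:

# Illustrative only: the logistic map x -> r*x*(1-x) near its first
# period-doubling bifurcation at r = 3; not an example from the abstract.
def logistic_orbit(r, x0=0.2, transient=500, keep=4):
    """Iterate the map, discard transients, and return the next `keep` values."""
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

print(logistic_orbit(2.9))  # settles on one fixed point: [0.6552, 0.6552, 0.6552, 0.6552]
print(logistic_orbit(3.1))  # a small parameter change, qualitatively new behavior:
                            # the orbit now alternates between ~0.558 and ~0.7646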
ABSTRACT (Yampolskiy): The paper begins with an introduction of the Singularity Paradox, the observation that "Superintelligent machines are feared to be too dumb to possess commonsense". Ideas from leading researchers in philosophy, mathematics, economics, computer science and robotics regarding ways to address said paradox are reviewed and evaluated. Suggestions are made regarding the best way to handle the Singularity Paradox. Finally, future directions for research are suggested.

13:30-14:30 Lunch

14:30-16:00 Invited Talks

Session A (New Building, Conference Room). Chair: Miłkowski
Session B (Bissell Library, 2nd floor). Chair: Bringsjord
14:30-15:15
Session A: Mark H. Bickhard, "What could cognition be ... if not computation, or connectionism, or dynamic systems?"
Session B: Brian Cantwell Smith, "The Fan Calculus: Ontology for the Future of AI"
ABSTRACT (Bickhard): Despite the fact that representation is at the center of AI and cognitive science, there is still no consensual model of what representation, and thus cognition, is. I argue that there are good reasons for this impasse: none of the approaches to modeling representation currently on offer is ultimately viable. I outline several of the deepest reasons for this conclusion, and offer an alternative, pragmatism-based approach to modeling representation and cognition.
ABSTRACT (Smith): Identity is classically taken to be an intrinsic property of objects. Real-world ontology is much more complex, and fluid. Typologies of type, instance, and use falter in the face of programs, documents, information. Sometimes we recognize ontic profusion, such as when we distinguish works, versions, copies, editions, translations, translations of editions, etc. Sometimes, but not always, we conflate things that are isomorphic (numerals and numbers, entities and their models, photos vs. copies of photos, names vs. named, a file in memory vs. a file on disk vs. "the same file" on another disk, etc.). At other times we use context to discriminate ("the chair is usually an anthropologist" vs. "the chair is retiring in June"). Always to draw all possible ontological distinctions is pedantic in the extreme, yet to fail to draw distinctions appropriately can lead to spectacular confusion. Intelligence requires sophisticated ontic skill: making discriminations, and committing to identity conditions, in ways appropriate to the task at hand.

A calculus is being designed, called the "fan calculus," in which identity (including the identity of the elements of the calculus itself) is taken not to be intrinsic, but to be a perspectival matter of how an entity is referred to. A gesture toward the calculus will be presented, with a discussion of its metaphysical and epistemological motivation, its potential uses, and the challenges that face its technical development.
15:15-16:00
Session A: Antoni Gomila, "Wherein is human cognition systematic?"
Session B: Oron Shagrir, "Computation, Implementation, Cognition"
ABSTRACT (Gomila): One of the strongest arguments in favour of a classical, cognitivist cognitive architecture (one based on language-like mental symbols and inferential processes over them) was the systematicity argument (Fodor, 1975; Fodor & Pylyshyn, 1988). In outline, the argument holds that (i) cognition is systematic, and (ii) the only remotely plausible explanation for systematicity is a cognitivist architecture. Classical Artificial Intelligence provided an important "existence proof" for such an approach. For many years, connectionism tried to meet this challenge by resisting (ii), i.e., by trying to show that connectionist architectures are also able to account for the systematicity of human cognition. The debate that ensued turned on the question of how to conceive of the constituent structure of cognition, which both sides assumed to be required by the systematicity argument.

In this contribution, we want to take issue with the first premise: that cognition is systematic in the first place. We agree that human cognition is sometimes systematic; but systematicity cannot be taken as a defining feature of cognition per se. Hence, neither classical cognitivism nor connectionist cognitivism provides the right basis for a general approach to cognition, although they may be needed to account for some special features of human cognition. Embodied approaches, such as dynamical systems theory and ecological perception, have already proved more promising for understanding cognition from the bottom up (Calvo & Gomila, 2008).

In order to challenge the assumption that all cognition is systematic, we will first discuss how the notion of systematicity is to be understood, and then present a battery of arguments against its generality:

i) it is assumed that non-linguistic beings, such as prelinguistic infants or non-human primates, exhibit systematicity, but the empirical evidence for such a claim is missing;

ii) in fact, the evidence rather indicates that these basic cognitive beings do not exhibit systematicity; the higher-order variables to which cognitive beings may be sensitive, as has been shown by ecological perception, are not up for grabs for inferential/computational transformation, but control action in a task-specific way;

iii) all the evidence that cognition is systematic is linguistic, which suggests that systematicity is a language-dependent feature, rather than the other way around (linguistic systematicity derived from cognitive systematicity, as per the LOT);

iv) even in those cases, systematicity is not as clear-cut as defined, because of the (pragmatic) phenomenon of context-dependency, which makes it non-algebraic; hence, structure cannot be the whole story in this regard;

v) cognitive systematicity follows linguistic systematicity in development, rather than the other way around; we will focus on the case of the complement structure, and on the evidence that its syntactic acquisition precedes propositional attitude attribution (de Villiers & de Villiers, 2009).

On these grounds, it will be argued that symbolist approaches in Cognitive Science invert the order of dependency in accounting for the isomorphism between language and cognition. Instead of assuming an architecture of symbols and rules as the basis of cognition, such a symbolic level seems to be a higher level of organization made possible by language, whose semantics can in this way be grounded in the more basic level of cognitive architecture. Thus, we defend a version of the "dual theory" of cognition (Carruthers, 2005), but instead of assuming that the basis of cognition is a classical architecture, we propose that the basic level can be better understood in dynamical, interactivist, sensorimotor terms. This basic level of cognition is not systematic in the way specified by the challenge, and it is wrong to look, at this basic level, for states precise, discrete and stable enough to be called symbols corresponding to the meanings of words. Language makes possible the appearance of a new level of cognition, one which, up to this point, seems distinctively human.
ABSTRACT (Shagrir): David Chalmers (1993/2012) articulates, justifies and defends the computational sufficiency thesis (CST). CST states "that the right kind of computational structure suffices for a possession of a mind, and for the possession of a wide variety of mental properties." Chalmers addresses claims about universal implementation, namely, that (almost) every physical system implements (roughly) every computation (Putnam 1988; Searle 1992). These claims seem to challenge CST: if every physical system implements every computational structure, then (if CST is true) every physical system implements the computational structure that suffices for cognition ("a possession of a mind"). Hence, every physical system is a cognitive system. If CST is true, in other words, then rocks, chairs and planets have the kind of cognition that we possess. Chalmers argues, however, that the antecedent of the first conditional (i.e., universal implementation) is false, and he offers a theory of implementation that avoids the pitfalls of universal implementation (1996; see also Scheutz 2001). This theory, according to Chalmers, provides solid foundations for artificial intelligence and computational cognitive science. I can only praise Chalmers's theory of implementation; I think he successfully shows that universal implementation claims are false. But I argue that Chalmers's theory of implementation does not block a different challenge to CST. The challenge, roughly, is that some possible physical systems simultaneously implement different computational structures (or different states of the same computational structure) that suffice for cognition; hence these systems simultaneously possess different minds. Chalmers admits this possibility elsewhere (1996). But I argue that it is more than just a remote scenario, and that it renders CST less plausible.

16:00-16:30 Coffee

16:30-18:30 Parallel sessions (4 x 3)

Session A (New Building, Conference Room). Chair: Gros
Session B (Bissell Library, Teleconferencing Room). Chair: Müller
Session C (Bissell Library, 2nd floor). Chair: Anderson
16:30-17:00
Session A: Pierre Steiner, "C.S. Peirce and artificial intelligence: historical heritage and theoretical stakes"
Session B: Slawomir Nasuto and Mark Bishop, "Of (zombie) mice and animats"
Session C: Oscar Vilarroya, "The Experion Conjecture: A Satisficing and Bricoleur Framework for Cognition"
ABSTRACT (Steiner): 'Precisely how much of the business of thinking a machine could possibly be made to perform, and what part of it must be left for the living mind, is a question not without conceivable practical importance'

Peirce, New Elements of Mathematics, vol.II. t.1, p. 625.

We are currently witnessing an important turn in the understanding of the nature of cognition: so-called "4-E cognitive science", by insisting on the manifold ways in which cognitive processes (perception, reasoning, memory, ...) are embodied, embedded, enacted, and especially (and possibly) extended, invites us to reconsider the fundamentals of our implicit ontology concerning the place and character of cognition. This opens new challenges for AI (Froese & Ziemke, 2009), but it can also lead us to understand differently how much our own intelligence is artificial.

Indeed, if we argue that human cognitive processes may notably be extended or distributed across the manipulation and production of artifactual structures, from symbols, spoken words, maps and the beads of abacuses to cell phones, PDAs, computers, and the World Wide Web, then we have to acknowledge that perhaps right from the start (i.e. since the emergence of our hominid ancestors), human intelligence has been made of artifacts: technological evolution and cognitive evolution go hand in hand. That remark sets the agenda for a better understanding of the various ways in which the production and use of informational and expert systems, as designed by empirical AI, modify our cognitive practices. But not only that. In what follows, I would like to pursue one theoretical stake of the idea that human intelligence is artificial right from the start, in relation to the philosophy of Charles Sanders Peirce (1839-1914), the founder of pragmatism, not to mention here Peirce's other crucial contributions to artificial intelligence, in terms of semiotics and graphs for first-order logic (see Tiercelin (1995), Sowa (1984), Bolter (1991)).

The first section of the paper recalls how much Peirce, in his reflections on the relations between cognition and tools, anticipated the most crucial insights of extended and distributed cognitive science, including the realization of cognitive processes in the workings of technical artifacts such as reasoning and logical machines, the precursors of computers. In the second section, I present Peirce's reflections on the conditions under which thinking machines really approximate (or fail to approximate) human reasoning. The third and concluding section presents a paradox that arises out of the Peircean considerations presented in the previous sections, and that may still be found in contemporary reflections on the status of artificial intelligence, if one endorses the idea that human cognition is made out of artifacts.

(1) Very early on, Peirce defended the idea that human thought is notably made of tools and instruments and, conversely, that instruments and tools can be parts of mind. Already in 1877, Peirce argued, for instance, that Lavoisier's alembics and cucurbits are, in a literal sense, instruments of thought; their use makes reasoning something done "with one's eyes open, in manipulating real things instead of words and fancies" (Collected Papers, 5.363). In an 1887 paper entitled "Logical Machines", Peirce also remarked that the unaided mind is limited in many respects, while "the mind working with a pencil and plenty of paper has no such limitation". Elsewhere in his vast work, Peirce willingly downplayed the importance of the brain in the existence, the workings and the localization of mind, be it a set of cognitive activities or more abstract thought (7.366). Peirce wanted to show the absurdity of the reduction of mind to neural processes, but not without arguing for a distributed localization of mental activities, provided a special sense of localization is adopted. This (distributed) localization is virtual localization (I will define it in my talk). According to it, the activity of language, for instance, is virtually localized in the use of brain, tongue, written symbols and inkstand (Skagestad, 1999; Skagestad, 2004). Inferring, calculating, reflecting, or controlling one's own thought are all distributed across external symbols, logical rules and instruments, and possibly logical and reasoning machines. Besides their role in cognizing, the latter embody operations of mind for a second reason: they process signs according to logical rules. Mind, for Peirce, is semiotic in its structure. There is thought without language; but every thinking process is made up of signs that stand for some object, for someone or for some further thinking process: another sign, itself standing for some object, for someone or for some further process, and so on ad libitum. Reasoning machines have that structure; they include and are signs that are addressed to something or someone in a never-ending process.

(2) Peirce paid a great deal of attention to what logical and reasoning machines (like Babbage's, Jevons's or Marquand's) can teach us about reasoning: as we do, they follow rules and they exhibit abilities of synthesis when they compute results. They "grind syllogisms" (2.59). Since, moreover, Peirce originally considered that human thinking has nothing to do with the presence of some self, some intuition or some phenomenal consciousness, we can see how close Peirce thought machines were to humans in terms of reasoning abilities (in addition to their cognitive nature, as seen above). The human mind, according to Peirce, is only special in virtue of the high degrees of self-control and self-correctiveness it can exercise over conduct, on the basis of norms and principles. Machines, for Peirce, can exercise self-control and self-correctiveness over their own operations, but not deliberately and purposively, on the basis of revisable aims, goals and principles, whereas human self-control is deliberate, purposive and endless in terms of reflexivity (5.442). This is not surprising for Peirce, since these limitations are at the basis of what machines are (for us) and of what they were made for.

(3) Undoubtedly, (1) and (2) can be found in many essays, papers and treatises published during the last fifty years, on cognitive science generally (for (1)), and on artificial intelligence as a scientific enterprise especially (for (2)). It is interesting to note how much Peirce anticipated these ideas, but not only that. A striking idea emerges if we consider (1) together with (2), as defended by one and the same writer (here: Peirce): if we want to distinguish between human intelligence and machine intelligence not by appealing to consciousness (as Peirce decided to do in his criticism of Cartesian philosophy), but by the degrees of control, purpose, and reflexivity machines can exhibit (as in (2)), and if we consider that human intelligence, including how we acquire and exercise self-control and reflexivity, is technically constituted (as in (1)), then we will definitely have to give up the quest for some non-technical factor(s) that would make human intelligence radically different from the abilities exhibited by machines (since the latter are also artefactually made). To put it otherwise: from that perspective ((1) with (2)), the only way for machines to approach human intelligence, in terms of reflexivity and control, would be for them to be able to off-load some part of their architecture or cognitive powers (and their products) onto environmental artifacts and other machines, whose use would then allow them to acquire and exercise new cognitive powers.
ABSTRACT (Nasuto & Bishop): "Suppose that a team of neurosurgeons and bioengineers were able to remove your brain from your body, suspend it in a life-sustaining vat of liquid nutrients, and connect its neurons and nerve terminals by wires to a supercomputer that would stimulate it with electrical impulses exactly like those it normally receives when embodied."

The Chinese Room Argument (CRA), in spite of the controversy it generated, remains a hallmark argument in the debate over the possibility of instantiating mind in computing devices. In its most basic form it addresses the most radical version of the claim as proposed by good old-fashioned Artificial Intelligence (GOFAI). Many scholars do not agree with its conclusions, or at least try to suggest frameworks which could circumvent them. One such area purported to escape the CRA is cognitive robotics. The hope of its proponents is that by providing a physical body, computational operations are married with the recently fashionable areas of embodiment and enactivism, and that by virtue of the latter the CRA fails to apply.

However, the extensions of the basic CRA first proposed by Searle in the original CRA paper explicitly addressed such a 'robot reply'. In accord with Searle's reply, we conclude that if we were to attribute genuine mental states/intentionality to such a computationally driven robotic device, we would have to do the same for any modern car equipped with sensors and an on-board computer.

Some cognitive roboticists concede that current robotic platforms have been too impoverished in terms of their sensory surface, but hold that it is merely a matter of providing them with more sensors to achieve genuine intentional states. Conversely, we suggest that as long as their efforts are directed towards improving and enriching the grounding of meaning in the external world (whilst neglecting the need for concomitant grounding in internal states), all that such devices will achieve are ever more sophisticated reflections of the relational structure of the external world in the relational structure of their internal formal representations.

As various CRA variants elaborate, the precise nature of the operations needed for the construction of internal representations (or the means by which a mapping between external and internal relational structures is achieved) is irrelevant.

Similarly, the question of whether a symbolic computational, sub-symbolic connectionist, or continuous dynamical-systems approach should be adopted translates into questions about the formal richness of the internal relational universe or the mathematical nature of the mapping between external and internal relational spaces. Although these are very important considerations, delineating some of the important properties of cognitive states, they pertain 'only' to the necessary aspects of intentionality related to the nature of regularities in the external world (continuous and statistical, or symbolic and recursive) and to the best formal means to extract and manipulate them; they do not reference, and remain ungrounded in, Searle's own internal bodily states.

The above considerations, important as they are, are clearly insufficient to fully ground intentional states, as Searle in the CRA, for example, would become painfully aware if the experiment were actually conducted. The demonstration would be very simple, if cruel: all that would be needed is to lock the door and wait. Soon enough, as the monoglot Searle remains unable to communicate his bodily needs to the outside world in Chinese, the CRA (or Searle, to be precise) would be no more. Ironically, this inability to communicate about his own internal states can be contrasted with his perfect ability (ex hypothesi) to communicate about the internal states of his Chinese interlocutors; they are, after all, external states to Searle.

True intentionality can only arise in systems which ground meaning jointly, respecting external constraints as well as internal states; a situation which, as the CRA illustrates, is impossible to achieve by a computational (or in fact any mechanistic/formal) system, as such systems have no physiological states at all.

A closely related point is that even though formal systems (even those instantiated in a robotic device) may in principle be rich enough to reflect the complexity of the relational structure of the external world, there is nothing in their constituent structures that will make them do so, or do anything at all for that matter. This is because of their very nature: the abstraction of any mechanistic rule or formalism from any system that instantiates it. For example, what symbolic operations are should be invariant to the means by which they are accomplished. Thus, there is nothing that inherently compels an artificial agent to do anything, to perform any form of formal manipulation that could help it map out the regularities of the external world. All that [Turing-machine-powered] robotics can hope to achieve is, to paraphrase Dennett, a weak form of 'as-if' autonomy and 'as-if' teleology, which in reality merely reflects the engineers' designs and the end-users' wishes.

In contrast, real cognitive agents have internal drives at all levels of organisation (survival, metabolic and physical) that make them act in the world, make them react to external disturbances (information) and manipulate that information in ways that support the immediate and delayed fulfilment of the drives at all levels. Such manipulation of information is intentional because it is tantamount to the biological, biochemical and biophysical changes of the real cognitive agents' biological constituents, which are intrinsically grounded (they have metabolic, physiological or survival value).

Intentionality comes not only from the potential mapping between the relational structures of the external world and the states of biological constituents; the changes of those constituents that come about as a result of external disturbances (which, under such a mapping, correspond to information manipulation) are also intrinsically grounded, as they follow physical laws and do not come about merely for the symbol manipulation's sake. Systems which are based on formal manipulation of internal representations are thus neither intentional nor autonomous, as no manipulation is internally driven, nor does it serve any intrinsically meaningful purpose other than the system designer's.

But the story does not end with robotic systems controlled by Turing machines alone. In the last decade, huge strides have been made in 'animat' devices. These devices are robotic machines with both an active neurobiological and an artificial (e.g. electronic, mechanical or robotic) component. Recently one of the co-authors, with a team from the University of Reading, developed an autonomous robot (aka 'animat') controlled by cultures of living neural cells, which in turn are directly coupled to the robot's actuators and sensory inputs. Such devices come a step closer to the physical realisation of the 'brain in a vat' thought experiment.

It is not so obvious that the potential of 'animat' devices (for example, to behave with all the flexibility and insight of intelligent natural systems) is as constrained by the standard a priori arguments purporting to limit the power of the [merely Turing-machine-controlled] robots highlighted earlier. Even if one accepts potential objections to animats' capacity for intentionality based on envatment (Cosmelli & Thompson, 2006), other experiments go a step further and create means of influencing in vivo the brain's control over the body, creating in effect a (mouse) zombie. Nevertheless, this paper will argue that neither animats nor zombie mice escape Searle's CRA, which we suggest continues to have force against claims of their symbol grounding, understanding and intentionality.
ABSTRACT: ›Hide‹ The aim of this communication is to present an approach to the notion of experion which was informally, but extensively, presented elsewhere (Vilarroya 2002). The proposal presents an alternative framework to the classical characterization of cognitive systems as internally representational systems, with a compositional syntax and semantics. The notion of experion focuses instead on the embedded and embodied coupling between a system and its environment, therefore assuming no divide between internal and external compartments. It is precisely in the context of these couplings, and in the way they are registered, that cognition will be characterized. Additionally, the notion of experion will be employed to build up a cross-level model of a system as a cognitive system, providing a systematic way to approach each level of analysis of the system.

My approach starts from the fact that natural cognitive systems have evolved to characterize and act in the situation in which they are involved, and exploits evidence and constraints revealed by evolutionary neurobiology. These preliminaries are combined with insights from other lines of research, such as empirical theories of perception, perceptual-based theories of concepts, sensorimotor contingencies research, situated cognition, cognitive linguistics, and embodied robotics. The proposal extracts from these trends what I consider to be insights on how a cognitive system works and how to model such a system to account for cognition. Yet, it cannot be accommodated in any of these lines of research alone, nor is it a compendium of their theses; in my view, it stands alone with original tenets of its own.

The notion of experion will be defined as a "system-controlled event within which a number of systemic-environmental contents are created that help in dealing with the individual's adaptive topic at issue." By event I mean a particular temporally bounded situation. By system-controlled I mean that it is the system which establishes the beginnings and endings of the event, and monitors its evolution. By systemic-environmental I mean some element or property that can only be understood as extended in the coupling that a particular system establishes between itself and a particular environment. The systemic-environmental qualification implies that the contents of an experion can only be characterized in the specific interaction between the particular system and the particular environment, and also that they are specifically configured in the context of the properties acquired by system-environment couplings, namely, the fact that any such coupling has an adaptive topic. In sum, an experion is a sort of systemic-environmental state of affairs.

The specific nature of the experion contents and their ability to address the adaptive situation at issue are a product of the deployment of the relevant associations with previous registers of such couplings, channeled through the basic operations of the systemic architecture. It is this property of readiness of the complete and rich past that provides the cognitive competence. This past is kept in the experion registers, which dispositionally maintain the contents of previous registers and are able to reproduce such dispositions when necessary. In short, the system relies on recreating the experienced past to deal with the present, rather than relying on representations to understand and act on it. In this sense, the cognitive properties of a system must be viewed as the outcome of the time and organization of particular experions, and not of specific pre-established (programmed) cognitive processes or states of the system.

The bottom line is that a cognitive system should not be seen as a system that represents reality, but as a system that adapts to it, adjusting the agent to the environment in the way best suited to obtaining its objectives: experiencing, and learning from experience, by memorizing and transferring its relevant experiences. This allows us to identify what the mark of the cognitive corresponds to in extant systems, by considering as a cognitive system any system that is capable of:

a) Experience: The capacity to establish a set of contents in the environment-system coupling which are relevant for the agent survival and reproduction.

b) Memory: The capacity to register the activity that preserves the contents of the coupling.

c) Association: The capacity to establish relevant connections among particular registers.

d) Learning: The capacity to modify new couplings by relevant registers.

Among many other things, this proposal establishes the basic principles with which an AI architecture must comply in order to become cognitive, as well as the conditions for identifying cognition in any artificial system.
17:00
17:30
David Davenport Joscha Bach Zed Adams and Chauncey Maher
The Two (Computational) Faces of Artificial Intelligence
Some Requirements for Cognitive Artificial Intelligence
Artificial Intentionality: Why Giving a Damn Still Matters
ABSTRACT: ›Hide‹ Artificial Intelligence (AI) is primarily an engineering discipline that attempts to construct machines with human-like capabilities, but it is also a science that tries to understand how human cognition works. Unfortunately, this means that "AI is an engineering discipline built on an unfinished science" [1]. The fact that the subject matter of this endeavour has traditionally been the realm of philosophy only complicates the issue. Not surprisingly then, throughout its short history---beginning with the Dartmouth conference in 1956---AI research has seen a lot of heated debates, most resulting from misunderstandings due to the differing goals, backgrounds and terminologies of those involved.

Today, there are two challenges (still) facing AI: doing it and, having done it, living with the consequences. The consequences include science fiction favourites: autonomous military killing machines (such as the "Terminator") able to wipe out humanity without a second thought, and revolts by armies of slave robots who feel they have been mistreated by their now-inferior human masters (as in "I, Robot"). Other, more subtle psychological and social consequences will likely result from the realisation that we humans are mere machines, and very limited biological ones at that, but discussion of these must wait for another day; first we have to show how AI is doable.

We can describe cognition at the functional, computational & implementation levels [2]. The functional level provides the "big picture": what purposes/tasks the agent/entity has to achieve. How it might do these is the concern of the abstract computational level, while the implementation level provides a physical instantiation of the particular computational design. There are usually lots of viable alternative solutions at each level, but in the case of human cognition, the physical implementation is obviously constrained to the biochemical. Neuroscientists are collecting masses of data, some of which is being used to drive large-scale brain simulations [3], but what really interests us as AI researchers & philosophers is how this particular implementation looks and functions at the computational level---in other words, what is left once we abstract away all the biochemical jargon and arrive at an implementation-neutral explanation. Such an explanation will generally be in terms of causal and/or informational sequences.

Within the computational level there appear to be two fundamental and mutually-exclusive "organisational/architectural" possibilities---the so-called symbolic and connectionist paradigms. Our everyday digital (von Neumann) machines are clear examples of the symbolic form. It is also evident that we humans display symbolic characteristics [4, 5] (indeed, we designed digital computers around our own capabilities). On the other hand, the neuroscientific evidence suggests humans (and other animals) are of the connectionist ilk. If this is the case, we clearly have a gap in our understanding of how the brain works, since traditional connectionist---Artificial Neural Network (ANN)---models are not able to meet all the demands of the functional level.

This paper will reexamine the symbolic & connectionist concepts and the representational issues that surround them. Cognition is viewed as fundamentally embodied, embedded, interactivist, representational, computational, and primarily concerned with prediction/anticipation [6, 7, 8]. The symbolic paradigm is based on literally "copying" representations and combining them together to form more complex notions, whereas the connectionist approach is to combine by simply "linking" to existing representations. In computer programming jargon this is rather like parameter passing "by-value" vs. "by-reference". We suggest that (a) Newell's notion of a Physical Symbol System [9] can apply equally well to both symbolic and connectionist approaches, and (b) these are thus real alternative paradigms able to support the necessary computations. To provide the required systematic & productive capabilities, connectionist networks must either be recurrent or nodes must retain state [10, 11]. It should be possible to instantiate any number of "virtual" symbolic models on a connectionist machine, just as any number of connectionist models can be executed on a symbolic machine. That arbitrary models can be created and run as desired is an important requirement for real intelligent systems [12].
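The "copying" versus "linking" contrast can be made concrete with a short sketch (Python, with invented class and variable names; this is only an illustration of the by-value/by-reference analogy, not of the model proposed in the paper). A compound representation is built once by copying its constituents and once by merely referencing them, so that a later change to a constituent leaves the first untouched but propagates to the second:

    import copy

    class Rep:
        """A minimal stand-in for a stored representation (hypothetical)."""
        def __init__(self, name, features):
            self.name = name
            self.features = features  # e.g. a list of feature labels

    # Two existing representations.
    red = Rep("red", ["colour:red"])
    ball = Rep("ball", ["shape:sphere"])

    # Symbolic-style combination: literally COPY the constituents ("by value").
    red_ball_copied = [copy.deepcopy(red), copy.deepcopy(ball)]

    # Connectionist-style combination: merely LINK to the constituents ("by reference").
    red_ball_linked = [red, ball]

    # Later, the shared representation of "red" changes.
    red.features.append("colour:crimson")

    print([r.features for r in red_ball_copied])  # unchanged: the copies are isolated
    print([r.features for r in red_ball_linked])  # changed: the links track the original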

The connectionist paradigm appears to offer the necessary and most suitable explanation of human-level cognition, consistent with its neural basis. Future AI research should thus focus on demonstrating the viability of this alternative on philosophical, theoretical, practical and neuroscientific grounds. If successful, fifty years from now the classical (von Neumann) digital computer may be a relic of history.
ABSTRACT: ›Hide‹ Artificial Intelligence is uniquely positioned to integrate the different disciplines of cognitive science within a single framework. But to do that, it needs to break free from the methodologism the field has settled into, and align itself along paradigms and research methods that are compatible with the goal of studying cognitive agency (or, more generally, the mind). Despite the disenchantment that has befallen the field of AI as a whole, several converging approaches have appeared recently, which aid in identifying a crucial set of requirements for the study of cognition within AI.
ABSTRACT: ›Hide‹ Many of us have a peculiarly intimate relationship with our iPhones. We bring them everywhere, all the time, and they play an essential role in how we navigate our environment, communicate with others, remember our experiences, and plan for the future. This and examples like it have been taken by many philosophers and cognitive scientists to illustrate the extended mind hypothesis (EMH) (e.g., Haugeland, 1995; Clark, 1997; Clark, 2008; Clark and Chalmers, 1998; Hurley, 1998; Rowlands, 1999; O’Regan and Noe, 2001; Shapiro, 2004; Wilson, 2004; Noe, 2004; Gallagher, 2005). According to EMH, the mind is a tightly coupled system, in which all of the components of this system are parts of the mind, including those that extend beyond the body, such as our iPhones.

Not everyone finds such examples compelling (e.g., Adams and Aizawa, 2001; Grush, 2003; Rupert, 2004). In brief, these critics allege that not all coupling is constitution. To maintain otherwise is to commit the "coupling-constitution fallacy" (Adams and Aizawa 2010a, 2010b). Clark may be the most prominent advocate of EMH, but as he admits (2008a, 37-38; 2008b, xxvi-xxvii), the core argument for EMH is due to Haugeland (1995). Yet both Clark and critics of EMH have failed to recognize that Haugeland saw EMH as part of a larger view about intentionality. That larger view essentially involves something Haugeland called "existential commitment." Roughly speaking, existential commitment is the willingness to stake one's identity as a knower on the success or failure of one's ability to skillfully cope with the world.

In this paper, we argue that existential commitment holds the key to addressing the charge that EMH commits a coupling-constitution fallacy. We grant that EMH cannot be supported simply by showing that the central nervous system is sometimes tightly coupled with equipment beyond the body. Rather, EMH makes sense because a genuinely intentional system must take responsibility for the proper functioning of its cognitive equipment. The mind includes the equipment it uses to skillfully cope with the world because it takes responsibility for the proper functioning of that equipment. In this sense, it’s not tight coupling itself that matters for identifying the parts of the mind; it’s responsibility for the proper functioning of those parts that matters.

What are the implications for artificial intelligence? We side with Haugeland in thinking that understanding how to implement intentionality in a system requires understanding what it is for that system to give a damn about its proper functioning.
17:30
18:00
Naveen Sundar Govindarajulu and Selmer Bringsjord Istvan Berkeley Antoine Van De Ven
Towards a Mechanically Verifiable Non-justificatory Proof for the Church-Turing Thesis
Machine Mentality?
Generative Models and Consciousness in Humans and Machines
ABSTRACT: ›Hide‹ The Church-Turing Thesis (CTT) underlies many philosophical arguments in theories of mind and artificial intelligence (see, e.g., Lucas [2] and Penrose [3]); knowledge of either its truth or its falsity will have ramifications not only for the philosophical debates carried out in AI but also for many of the fundamental assumptions of AI (see Bringsjord [1]). The CTT holds equal importance for philosophical and foundational issues in AI-related fields, chiefly its native field of logic and computability theory, complexity theory, hypercomputation, physics and the physics of computation. Given its relevance to foundational issues in such a wide variety of fields, there have been quite a few attempts to settle it one way or another. In particular, there has been a spate of recent projects which purport to prove the CTT. Unfortunately, none of these projects stands up to the scrutiny demanded of mathematical proofs. We will detail these attempts in the paper, and also present a project that seeks to overcome the failings of past approaches.

Our paper will encapsulate a concise report on why the recent approaches fall short of proving the CTT. First and foremost, the proofs presented are not verifiable. Second, the proofs fail to distinguish, or conflate, what Shagrir terms in [4] the justificatory and non-justificatory versions of the CTT (and also of Turing's Thesis and Church's Thesis). Briefly, a proof of the justificatory version of the CTT would require a justification of the effectiveness conditions, while a proof of the non-justificatory version would not; in a proof of the non-justificatory version, one simply stipulates the effectiveness criteria and then proceeds with the proof.

We view the CTT as making a claim about what can be effectively computed by cognitive agents or human computers acting in a mechanized fashion. Turing intended one such set of conditions stipulating how such agents might act; one version of these conditions, reported concisely by Shagrir, is the following (quoted from Shagrir's [4]):

1. The immediately recognizable (sub-) symbolic configuration determines uniquely the next computation step.

2. There is a fixed bound on the number of symbolic configurations that can be immediately recognized.

3. There is a fixed bound on the number of states that need to be taken into account.

4. Only immediately recognizable symbolic configurations are changed.

5. Newly observed configurations are within a bounded distance of an immediately previously observed configuration.

The above conditions require that any formalization of the CTT address how such agents and human computers might behave. Our proof will be in the cognitive event calculus (the CEC), a multi-operator intensional logic built on the event calculus that has been used to analyze and predict the cognition and behavior of multiple agents acting in dynamic worlds. The CEC has been successfully used on problems such as the false-belief task and arbitrarily large versions of the wise-man puzzle; it is currently being used to simulate the task of mirror self-recognition, and also to model certain economic phenomena. (We leave citations aside to preserve anonymity.)
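To give a rough flavour of the kind of formalism involved (the operators and example axioms below are illustrative assumptions in the general style of such cognitive calculi, not the authors' actual CEC axiomatization), one works with modal operators for perception, knowledge and belief indexed by agent and time, for instance:

    P(a, t, φ) → K(a, t, φ)    (what agent a perceives at time t, it comes to know)
    K(a, t, φ) → B(a, t, φ)    (what agent a knows at time t, it believes)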

We seek to prove the following two sentences corresponding, respectively, to the easy and hard directions of the CTT:

∀m, i, o. T (m, i, o) → ∃p. E(p, i, o)
∀p, i, o. E(p, i, o) → ∃m. T(m, i, o)

Roughly, the formula T(m, i, o) denotes that the Turing machine with m as its program code, operating with i as input, halts with o as output. The formula E(p, i, o) denotes that a human computer following a set of effective instructions p starts with i and ends with result o. Definitions of T and E via axiomatic formalizations will be given in the paper. Axiomatizing T, though straightforward, is tedious, and our requirement that the proof be machine-checkable dictates that the formalizations be efficient enough to allow a machine-verifiable, semi-automated proof.

We axiomatize E by modeling the above conditions 1-5 in terms of what an agent perceives, believes and knows at any given time. The information that the human computer perceives, believes and knows at any given time dictates the action taken by the computer in the next time step. We also stipulate bounds on what and how much the agent can perceive, believe and know at any given time; this is not straightforward and requires some logical machinery. We also need to describe the workspace that the agent operates in during the calculation and the calculational actions permitted. A human computer acting mechanically cannot move to arbitrary regions of the workspace in consecutive time steps; this is axiomatized by providing a structure on the workspace that lets us calculate distances between pairs of points in the workspace. (Readers may note parallels in the above paragraph with issues in logical omniscience.) The effective computation carried out by the agent is formalized as planning and acting by the agent.
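Purely as an illustration of how such a locality requirement might be rendered (our notation, not the authors' actual axioms), condition 5 above could take a form such as:

    Observes(a, t+1, r) → ∃r′. Observes(a, t, r′) ∧ d(r, r′) ≤ b

where d is the distance function supplied by the structure placed on the workspace and b is the fixed bound.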

A machine-verified proof of the CTT that is semi-automatically generated will be presented at the conference.
ABSTRACT: ›Hide‹ A common dogma of cognitive science is that cognition and computation are importantly related to one another. Indeed, this association has a long history, connected to older uses of the term 'computer'. This talk begins with a brief examination of the history of the association between computers and putatively thinking machines. However, one important place where the modern sense of this association is made explicit is in Turing's (1950) paper “Computing Machinery and Intelligence”, where the infamous Turing test is introduced. The proposals that Turing makes in this paper have been the subject of considerable debate. In this talk, the details of Turing's claims will be examined closely and it will be argued that two importantly distinct claims need to be discerned, in order to make good sense of some of Turing's remarks. The first claim, which may be construed as an ontological one, relates to whether or not the class of entities that 'think' includes computational devices. The second claim, which is more of a semantic one, relates to whether or not we can meaningfully and coherently assert sentences concerning 'thinking' about computational devices. It is the second of these claims which will be the main focus of the rest of the talk. In particular, the slightly broader question of whether we can reasonably attribute mental properties or predicates to computational devices will be examined.

One substantial advantage of construing Turing's (1950) thesis in this manner, over and above enhancing the overall coherence of the text, is that this question can be subjected to empirical study. This helps avoid the problems which arise with our notoriously unreliable intuitions. The bulk of the remainder of this talk focuses on the use of methods from large-scale, web-based corpus linguistics to study this question. Raw search-engine searches for specific strings are compared with the results from Google Books Ngram methods (Michel, Shen, Aiden, et al., 2011) and the WebCorp system (Kehoe and Renouf, 2002). Unfortunately, all these methods have problems and limitations, so the relative strengths and weaknesses of each method are also discussed. Prior to reaching a final conclusion, a number of objections to the claims advanced in this talk are discussed and it is argued that they do not make a compelling case against the main conclusion. Thus, it is argued that Turing's (1950) prediction that "...[A]t the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted" has come true.
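The kind of frequency comparison at issue can be sketched in a few lines of code (the corpus file and phrase list below are hypothetical stand-ins for the web-scale resources the talk actually uses; Python):

    import re
    from collections import Counter

    # Hypothetical plain-text corpus standing in for web-scale data.
    CORPUS_FILE = "corpus.txt"

    # Candidate attributions of mental predicates to computational devices.
    PHRASES = [
        "the computer thinks",
        "the machine thinks",
        "the computer believes",
        "the machine believes",
    ]

    with open(CORPUS_FILE, encoding="utf-8") as f:
        text = f.read().lower()

    counts = Counter()
    for phrase in PHRASES:
        # Count non-overlapping literal occurrences of each phrase.
        counts[phrase] = len(re.findall(re.escape(phrase), text))

    for phrase, n in counts.most_common():
        print(f"{n:8d}  {phrase}")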
ABSTRACT: ›Hide‹ In this paper it is suggested that generative models are an essential ingredient for human-like consciousness and that they can help to explain consciousness. What is meant by consciousness depends on the definition that is used, but we will refer to experiments with humans to show a difference between being conscious and not being conscious. As an example we refer to experiments with people with blindsight. These people have no damage to their eyes or retina, but the direct nerve connections to the visual cortex are cut or damaged, or that part of the visual cortex itself is damaged. Such people report that they are not consciously aware of seeing anything, but unconsciously they can avoid objects or feel movements. The most plausible explanation is that there is also a pathway from the retina to older parts of the brain, such as the brainstem, which can indirectly give signals to the higher-level cortex.

So people have said that consciousness resides in the cortex, but how and why does the cortex differ in this respect from older parts of the brain, such as the brainstem?

We suggest that generative models are essential for consciousness.

In neural networks this requires connections and signaling in two directions. One direction can be called bottom-up: the signals from the senses moving in mainly one direction through the network. The other direction can be called top-down: it starts in the higher-level parts of the brain and can generate electrical signals in the brain as if they came from the senses. Dreaming, visualizing internally, or hearing sounds or music in your head are examples of that. In artificial intelligence we can point to Helmholtz Machines, Deep Belief Networks and Hierarchical Temporal Memory as examples. In simpler neural networks, such as a feedforward neural network, the information only flows in one direction, so these do not contain a generative model. We predict that the older parts of the brain are also mostly one-directional, which can be very fast and lead to automatic unconscious reflexes. They do not contain a hierarchical structure like the cortex, which is needed to make good abstractions through a generative model.
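A minimal sketch of the two directions of signalling, in the general spirit of a Helmholtz machine (Python with NumPy; the weights are random placeholders, so this illustrates the architecture rather than a trained or biologically faithful model):

    import numpy as np

    rng = np.random.default_rng(0)
    n_sensory, n_hidden = 8, 3

    # Recognition (bottom-up) and generative (top-down) weight matrices.
    W_recognize = rng.normal(size=(n_hidden, n_sensory))
    W_generate = rng.normal(size=(n_sensory, n_hidden))

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Bottom-up: a sensory pattern drives a higher-level (hidden) state.
    sensory_input = rng.random(n_sensory)
    hidden_state = sigmoid(W_recognize @ sensory_input)

    # Top-down: the hidden state alone regenerates a sensory-like pattern,
    # i.e. the network "imagines" input in the absence of real stimulation.
    imagined_input = sigmoid(W_generate @ hidden_state)

    print("hidden state:   ", np.round(hidden_state, 2))
    print("imagined input: ", np.round(imagined_input, 2))

    # A purely feedforward network has only W_recognize and therefore no
    # top-down pass: it can respond to input but cannot generate it.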

Because we propose that generative models are essential for consciousness, this would explain why people with blindsight cannot consciously see: the generative network in the visual cortex cannot be used. The older parts of the brain can detect movements or produce instinctive reflexive behavior depending on visual input, so in that way the person can react to the world but not be consciously aware of it. Connections between the older parts of the brain and the higher-level cortex could make the person indirectly aware of something in the visual field, but then we predict that the person experiences it as a kind of feeling, because it comes indirectly from other, older parts of the brain and not directly from an outside sense.

It has been discovered that some neurons fire both when a person thinks about or performs an activity and when that person sees another person do the same. These neurons have been called mirror neurons, but in our model and interpretation of consciousness this phenomenon is also predicted and explained naturally. Thinking about or visualizing something, or interpreting what another person is doing or feeling, means that generative models are activated that are learned abstractions of those concepts. For example, in one experiment it was found that a single neuron in a patient always fired when the person saw, or even merely thought of, Bill Clinton. We think that such neurons, including mirror neurons, are formed when the brain learns to make abstractions of causes in the world.

Self-consciousness can then be interpreted as having a model of the world that includes yourself.

The generative model can also make predictions and simulate internally what would happen in the real world. This is also part of consciousness and can be interpreted as imagining or dreaming. The process of evaluating things, weighing choices and making decisions can be deterministic, but those processes in the brain can be interpreted as generating a kind of internal dialogue that is also an aspect of consciousness and gives rise to an apparent Free Will.

None of the above suggestions, interpretations and principles excludes machines from having consciousness or the same apparent Free Will that we have.

By better understanding what consciousness is, we would be able to analyze if an animal or artificial agent could be conscious or not, or in what way or on what level. This could influence the ethical treatment and rights of animals and artificial agents.

A philosophical zombie agent would be able to pass the Turing test, but if we understand what consciousness is and are able to analyze the inner workings of the agent, we would then be able to know whether it is conscious or not.

More work is needed, but these suggestions and ideas could give hints as to which directions may be most fruitful in understanding consciousness and in creating intelligent machines with consciousness and with a similar experience of Free Will.

We recommend designing and implementing cognitive architectures that include generative models as an essential ingredient, and we predict that artificial intelligent agents without generative models will probably never have human-like consciousness.
18:00
18:30
Peter Bokulich Fabio Bonsignorio Justin Horn, Nicodemus Hallin, Hossein Taheiri, Michael O'Rourke and Dean Edwards
The Physics and Metaphysics of Computation and Cognition
AI, Robotics, Neuroscience and Cognitive Sciences are aspects of or rely upon a new experimental science: the science of physical cognitive systems?
The Necessity of the Intentional Stance and Robust Intentional State-Ascription Regarding Unmanned Underwater Vehicles
ABSTRACT: ›Hide‹ In this paper, I articulate a physicalist ontology of emergent properties, and I argue that this provides a framework for addressing the relationship between computationalist and mechanistic accounts of the mind. My account takes as its foundation the concept of physical information, together with the related concepts of a dynamical system and a dynamical degree of freedom. I argue that all higher-level emergent entities and properties are the result of a reduction of the physical degrees of freedom (where these reductions are the result of aggregations and/or constraints imposed on the full set of degrees of freedom of the system). Physical information is then defined as a dynamically enforced correlation between degrees of freedom of distinct systems. The information content of a system can be formalized in terms of the von Neumann entropy of the state of the system.
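For reference, the standard quantities presumably intended here (stated in conventional notation; the author may formalize them differently) are the von Neumann entropy of a state ρ, and the mutual information between subsystems A and B, which quantifies exactly the sort of dynamically enforced correlation between degrees of freedom described above:

    S(ρ) = −Tr(ρ log ρ)
    I(A:B) = S(ρ_A) + S(ρ_B) − S(ρ_AB)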

I argue that viewing mental states as informational states of a neural system resolves several problems in philosophy of mind: First, it allows us to view computational models of cognition as a subset of dynamical systems models. The (digital) computational model involves discretized degrees of freedom and a limited set of dynamics, but to be physically acceptable, the computational process will have to be realized in more fundamental continuous mechanics. The question of the nature of the cognitive degrees of freedom will have to be decided empirically, but the underlying metaphysics will be the same, regardless of the precise details of the dynamics of neural systems.

Second, this informational account allows us to dispel various confusions about objectivity and subjectivity in physics and psychology. It is often argued that the mind cannot be reduced to the brain because the mental is subjective and the physical is objective. However, physical information is perfectly respectable from a physicalist point of view, and it is nevertheless subjective in several important respects: The information belongs to one particular system, and to no other. The correlations that make up the information are formed from a particular “point of view,” which (depending on the nature of the systems involved) may be unique. The information is intrinsically about some other system, and thus provides a natural grounding for intentionality.

Third, this dynamical account of information and computation reveals the flaws in arguments against the computational account of the mind. I address John Searle’s arguments specifically. Searle argues that nothing is intrinsically a computation and that one could treat any arbitrary system (e.g., the molecules in a wall) as an instantiation of an arbitrary computer program (e.g., a word processing program). However, this argument ignores the causal-dynamical aspect of any instantiation of a program; in fact, only carefully engineered systems (viz. computers) will have the counterfactual behavior specified by the computational model. We can give an ontologically robust account of which systems are instantiations of particular programs and which are not; this account will rely on the effective degrees of freedom of that system and the dynamics that governs the evolution of those degrees of freedom. Likewise, the fact that physical information has a point of view and aspectual shape undermines Searle’s claim that mental states cannot be ontologically reduced to physical states.

I conclude by arguing that physics provides us with very strong evidence for the truth of physicalism, and that this ontology requires that all systems are mechanical systems. However, as systems develop structure, some of the microphysical details become irrelevant to the functioning of the system. This allows for the emergence of higher-level states, which can then be correlated with other systems in the environment, and these correlations can themselves be manipulated in a systematic law-governed way. Computation is – at its metaphysical root – just such a manipulation of correlations. Information is real, and well-defined, even at the subatomic level. However, it is only when we have higher-level systematic manipulations of information that we have computation.
ABSTRACT: ›Hide‹ It is likely that, in AI, Robotics, Neuroscience and the Cognitive Sciences, what we need is an integrated approach putting together concepts and methods from fields so far considered quite distinct, such as nonlinear dynamics, information, computation and control theory, as well as general AI, psychology, the cognitive sciences in general, the neurosciences and systems biology.

These disciplines share many problems, but have very different languages and experimental methodologies. It is thought that while tackling many serious 'hard core' scientific issues it is imperative, probably a necessary (pre)requisite, that we make serious efforts to clarify and merge the underlying paradigms, the proper methodologies, and the metrics and success criteria of this new branch of science.

Many of these questions have already been approached by philosophy, but in this context they acquire a scientific nature. For example: Is cognition possible without consciousness? And 'sentience'? In the context of AI research, various definitions of consciousness have been proposed (for example by Tononi [44], to quote an example favoured by the author). How do they relate to previous and contemporary philosophical analysis? Sometimes scientists may look like poor philosophers, and the opposite holds too: philosophers may look like poor scientists. Yet the critical passages in the history of science during a paradigm change, or at the birth of a new discipline, have often involved a highly critical conceptual analysis intertwined with scientific and mathematical advances. The scientific enterprise is now somehow close to unbundling the basic foundations of our consciousness and of our apperception of reality, and it is clear that there are, at the very least, some circularity issues with the possible 'explanations'.
ABSTRACT: ›Hide‹ In unmanned underwater vehicle (UUV) research at the University of Idaho, researchers are developing mission architectures and protocols for small fleets of UUVs. In the specifications of these types of missions, UUVs are regularly described using the language of intentional states, or states which refer to or are about something outside themselves. Examples of intentional states include, but are not limited to, goals, beliefs and desires.

How seriously are we to take these ascriptions of intentional states? Are the UUVs "true believers" in the sense that their intentionality is more robust, or are our ascriptions of intentionality merely a convenience of discourse that should not be given much weight? These questions frame the present agenda of the authors, who defend a version of the former position...

18:30-19:00 Coffee

19:00-20:00 Keynote Rolf Pfeifer, "Embodiment - powerful explanations and better robots"
[Location: ACT New Building Amphitheater]

20:00-21:30 Dinner (on site)

21:30 Bus transfer from Anatolia College
Bus 1: "Plateia Eleftherias", Port entrance, corner Ionos Dragoumi/Leoforos Nikis (seafront)
Bus 2: "Levkos Pyrgos" (White Tower) on Leoforos Nikis (seafront)


Tuesday, 04.10.2011

8:15 Bus transfer to Anatolia College (two buses)
Bus 1: "Plateia Eleftherias", Port entrance, corner Ionos Dragoumi/Leoforos Nikis (seafront)
Bus 2: "Levkos Pyrgos" (White Tower) on Leoforos Nikis (seafront)

9:30-11:00 Invited Talks

 Session A
(New Building, Conference Room)
Session B
(Bissell Library, 2nd floor)
 Chair: Vilarroya | Chair: Bokulich
9:30
10:15
Aaron Sloman Kevin O'Regan
The deep, barely noticed, consequences of embodiment. (Explicitly or implicitly criticising most embodiment theorists)
How to make a robot that feels
ABSTRACT: ›Hide‹ In just over four decades of thinking about relationships between AI, Philosophy, Biology and other disciplines I have found that there are a number of requirements for progress in our understanding that are often not noticed, or ignored. In particular, our explanatory theories need to take account of a number of facts about the world and things that live in it, which together have deep implications for theories of mind, whether natural or artificial.

1. The universe contains matter, energy and information. (For an answer to 'What is information?' see [1]). Life is intimately connected with informed control. Almost all processes involving living things, including metabolism, use information to select among options provided by configurations of matter and energy, whereas inanimate matter behaves in accordance with resultants of physical and chemical forces and constraints.

2. The types of information, the types of control, and the types of problem for which informed control is required, are very varied, and changed dramatically in many different ways between the earliest life-forms and modern ecosystems including humans and their socio-economic superstructures. We need to understand that (enormous) diversity in order to understand the varieties of natural intelligence and in order to understand requirements for modelling or replication in artificial systems.

3. The earliest and most obvious uses of information are in "on-line" control of discrete or continuous forms of behaviour triggered or guided by sensory information -- and this may suffice for microbes in constantly changing chemical soups. Some researchers seem to think that's all brains are for, and some roboticists aim for little more than that in their robot designs.

4. As physical environments, physical bodies, and types of behaviours of prey, predators, conspecifics, and inanimate but changing features of the environment (e.g. rivers, winds, waves, storms, diurnal and seasonal cycles, earth-quakes, avalanches, etc.) all presented new and more complex challenges and opportunities, the types of behaviour, sensory-motor morphologies, forms of control, types of information, and forms of information-processing became more and more complex, especially in organisms near the peaks of food-pyramids with r/K trade-offs favouring K strategies (few, but complex, offspring [2]).

5. In particular, for some species, the importance of on-line control of interaction with the immediate environment declined, in some situations, in comparison with abilities to store and use information about the past, about remote locations and their contents, about possible futures, and about the information processing done by other individuals (e.g. infants, mates, competing and collaborating conspecifics, prey, predators etc) and by themselves (self-monitoring, self-debugging, selection between conflicting motives, preferences, hypotheses, etc.)

6. One consequence of all this was the increasing importance of informed control of information processing, as contrasted with informed control of actions in the physical environment. The need to be able to acquire, store, analyse, interpret, construct, derive, transform, combine and use many different types of information, including information about information, led to development (in evolution, in epigenesis and later in social-cultural evolution) of new forms of encoding of information (new forms of representation) new information-processing mechanisms and new self-constructing and self-modifying architectures for combining multiple information processing subsystems, including sensory motor sub-systems.

7. The ability to think about possibilities, past and future and out of sight, touch and hearing, as opposed to merely perceiving and acting on what is actual, became especially important for some species, and the requirements for such mechanisms are closely related to the development of mathematical capabilities in humans. For a partial analysis of the requirements see [3]. For links with development of mathematical competences in children and other animals see [4].

8. Because it is very hard to think about all of these issues, and how interdependent they are, most researchers (in philosophy, AI, robotics, psychology, neuroscience, biology, control engineering) understandably focus their research on a small subset. Unfortunately some of them write as if there is nothing else of importance, and that has been an unfortunate feature of many recent waves of fashion, including the fashion for emphasising only aspects of embodiment concerned with on-line interaction with the immediate environment, as opposed to aspects concerned with being located in an extended, rich, diverse, partly intelligible universe of which the immediate environment is a tiny fragment and in which not only what actually exists is important but also what might happen and constraints on what might happen [5], along with the invisible intangible insides of visible and tangible things, and their microscopic and sub-microscopic components.

9. A good antidote for some of this myopia is the work of Karmiloff-Smith on transitions in understanding micro-domains.[6]

10. When we have absorbed all that, perhaps we can attend to the requirement for much of the information processing to make use of virtual machinery as has increasingly been required in artificial information processing systems over the last six decades, including self-monitoring directed at virtual machine operations, not physical processes -- providing the roots of a scientific theory of qualia and the like, with causal powers. But first we have to understand the (mostly unobvious) requirements that drove it all.

Source: http://www.cs.bham.ac.uk/research...
ABSTRACT: ›Hide‹ Usually "feel" or, as philosophers often call it, "phenomenal consciousness" or "qualia", is considered a "hard problem" in consciousness research. There seems to be an "explanatory gap" preventing us from giving a scientific account of feel. I show how the "sensorimotor" approach, by redefining feel as a way of interacting with the world, overcomes this problem. From then on it becomes clear that there is no obstacle to making a robot that feels.
10:15
11:00
Tom Ziemke Ron Chrisley
Are robots embodied?
Computation and Qualia: Realism Without Dualism
ABSTRACT: ›Hide‹ Are robots embodied? (to be announced)
ABSTRACT: ›Hide‹ I have argued before that, contrary to current conventional wisdom, it might be the case that qualia exist and yet dualism is false. Whether this is true in our own case seems to me to be an empirical issue that cannot yet be decided. What we *can* do before we are in a position to decide the issue is a) attempt to design and build artificial systems which have qualia but which are wholly physical; and b) design and build artificial systems -- possibly including those in a) -- such that by designing, building and interacting with them we acquire new capacities and concepts that put us in a better position to design and build more conscious-like systems. This spiralling "improved conception/improved design" iteration may eventually put us in a position to answer not only questions such as: "Can we build an artificial agent with qualia?", but also: "Do we have qualia?" and "Can we understand ourselves non-dualistically?" Even better, it may eventually put us in a position where we can see why these questions are flawed (if they are), and finally understand, and ask, the questions that are the right ones to ask.

11:00-11:30 Coffee

11:30-13:30 Sections (5 x 3)

 Session A
(New Building, Conference Room)
Session B
(Bissell Library, Teleconferencing Room)
Session C
(Bissell Library, 2nd floor)
 Chair: Steiner | Chair: Dodig-Crnkovic | Chair: Sandberg
11:30
12:00
Daniel Susser Micha Hersch Raffaela Giovagnoli
Artificial Intelligence and the Body: Dreyfus and Bickhard on Intentionality and Agent-Environment Interaction
Life prior to neural cognition for artificial intelligence
Computational Ontology and Deontology
ABSTRACT: ›Hide‹ Hubert Dreyfus’s groundbreaking work in the philosophy of mind has demonstrated conclusively that the body plays a fundamental role in all facets of intelligent, and indeed intentional, life. And in doing so he has put to rest once and for all the formalist fantasy of a purely algorithmic, disembodied mind. Dreyfus’s work, however, leaves unanswered a crucial, if obvious, question: namely, what is a body? That is to say, what are the features of a physical system which are necessary and sufficient for it to relate intentionally (and later, intelligently) to the world? This question forms a crucial intersection between philosophy of mind and the philosophy of artificial intelligence. For once the body is understood as constitutive to mental life, reverse engineering the mind comes necessarily to involve reverse engineering the body. In this paper, I argue (1) that Mark Bickhard’s interactivist theory of mind complements Dreyfus’s work, and (2) that it provides the conceptual tools needed to answer the question that Dreyfus’s work raises.

For both theorists, intentionality is ultimately grounded in the ability to interact successfully with the world. Dreyfus’s critique of information-processing theories of cognition and intentionality hinges on the notion that an account comprised entirely of discrete facts and algorithmic rules cannot sufficiently explain the way intelligent creatures mitigate the contextual, indeterminate nature of the human lifeworld. His alternative approach—a mixture of Heideggerian/Rylean ontology, gestalt psychology, Merleau-Pontian phenomenology, and his own analysis of skill acquisition—purports to do away with this problem by describing the phenomenological mechanisms by virtue of which we “skillfully” interact with the world and cope with its indeterminacy. Skills or “stored dispositions” are, for Dreyfus, pre-reflective reactions to the environment being this or that way, and the body—by virtue of brain architecture, environmental solicitations to action, and the tendency toward equilibrium with the environment—is understood as both the structuring force and reservoir of skills.

Similarly, for Bickhard, intentionality is bound up with agent-environment interaction. His theory centers on the notion of “recursive self-maintenance”—the capacity of an open thermodynamic system to react to variations in its environment in a way that contributes to the further functioning of the system. Thus where Dreyfus talks of skillful action providing the conditions for the possibility of anticipating meaning, Bickhard talks of the functional presuppositions of complex systems—i.e., causal dependencies which select certain system behaviors in response to certain environmental interaction conditions—providing the conditions for the possibility of recursive self-maintenance. On both accounts, it is the intentional organism’s pre-cognitive responses to environmental solicitations that ground meaningful interaction with the world. And on both accounts, it is the organism’s body that determines the type of environmental solicitations that will be afforded.

Combining these two approaches, I argue, allows us to specify precisely the features of a physical system required for skillful action, and thus also the types of physical systems capable of intelligence.
ABSTRACT: ›Hide‹ Since its appearance at the Dartmouth Conference in 1956, Artificial Intelligence (AI) has undergone a process of naturalization, moving from the symbolic computations of Good Old-Fashioned AI (GOFAI) [Haugeland, 1985] to the now widely accepted concept of embodied cognition. GOFAI can be seen as an offspring of Hilbert's program, in the sense that it assumes that the "rules according to which our thinking actually proceeds" [Hilbert, 1928] can be syntactically represented in the form of logical rules. The proponents of this traditional artificial intelligence recycled Descartes' mind-body dualism as a software-hardware dualism.

However, the limitations of this approach became apparent when it came to implementing intelligence in the "real world", i.e., to relating symbols to some perceptual reality [Harnad, 1990]. Those difficulties, along with technological progress in robotics, led to the resurgence of the by then somewhat forgotten cybernetics approach to artificial cognition [Cordeschi, 2002]. This approach provides a strong grounding in the environment, as it considers intelligence in a robotic device endowed with sensors and actuators. It focuses on the control of low-level behavior and investigates how sensorimotor couplings can produce seemingly intelligent and purposeful behavior [Rosenblueth et al., 1943]. Combined with influences from Varela's enactive theory of cognition [Varela et al., 1991], this led to the appearance of embodied cognition as a new framework for the study of artificial intelligence. According to this theory, intelligence cannot exist in a vacuum, but must be grounded in an environment through a body. Cognition emerged to adequately guide the actions of the body in a given environment. Thus, it is tuned to a particular body and a particular environment and cannot be considered independently from them [Pfeifer & Bongard, 2007]. It comes as no surprise that this theory became quite popular within the robotics community, as it suggested that robotics, dealing as it does with cognition in a robotic body situated in an environment, is the only valid synthetic approach to cognition. Indeed, robotics is the discipline of choice for a synthetic approach to embodied cognition, as it provides both the body and the mind. Robotics research inspired by this approach has recently made interesting progress and has in turn attracted the interest of philosophers [Metzinger, 2008].

However, Varela's contribution to the understanding of cognition goes further than this. For him, through the concept of autopoiesis, cognition is not restricted to what is classically understood as thinking [Varela 1989]. Indeed, the immune system can be seen as a cognitive system and, more generally, any living system is a cognizing system. In this view, intelligence is an outgrowth of the biological regulation which takes place in living systems. Thus a better understanding of intelligence must go through the study of proto-neural cognition, for example the way plants can anticipate and adapt to changes in their environment [Kami et al. 2010]. This will enable researchers to go beyond a neurocentric view of cognition and gain a new, broader view of what cognition is about.

Although necessary, this is not sufficient. Ultimately, it will be necessary to study the articulation of intelligence and life itself to gain more insight into the nature of intelligence and be able to create it. Indeed, it is difficult to conceive of intelligence without a purpose. In natural systems, the purpose imposed by Darwinian selection is the perpetuation of the species, i.e. of a particular form of life. For artificial systems, roboticists are still (as in the early cybernetics days [Rosenblueth et al. 1943]) confronted with the problem of how to endow the robot with a sense of purpose, and no really satisfying solution has yet been put forth.

To address this issue, I argue that the philosophy of life of Hans Jonas [Jonas, 1966], who also sees a continuity between the organic and the mind, is a good basis from which to explore the relationship between life and intelligence, and subsequently how purposeful intelligence can appear in artificial systems, whether they are made of inert or animated material.
ABSTRACT: ›Hide‹ I would like to briefly discuss an interesting argument from the recent book of John Searle Making the Social World (Oxford 2010) that tries to consider the construction of a society as an “engineering” problem and concludes that deontology works against the “computational” or “algorithmic” view of consciousness.

Deontology is an aspect of human creativity through the performance of speech acts (Searle 2010, chap. 4). So, for example, the man who says "This is my property" or the woman who says "This is my husband" may be creating a state of affairs by Declaration. A person who can get other people to accept this Declaration will succeed in creating an institutional reality that did not exist prior to the Declaration. We have two cases: first, by Declaration a certain person or object X counts as Y (a status entity with a precise function) in C (a context); second, we (or I) make it the case by Declaration that a certain status function Y (such as corporations or electronic money) exists in C (a context). The deontic aspect of the use of language would therefore distinguish humans from robots. I will sketch Searle's argument against the computational model (1) and criticize the reasons Searle offers to warrant his criticism (2).
12:00
12:30
Massimiliano Cappuccio Stefano Franchi Roman Yampolskiy
Inter-context frame problem and dynamics of the background
The Past, Present, and Future Encounters between Computation and the Humanities
Artificial Intelligence Safety Engineering: Why Machine Ethics is a Wrong Approach
ABSTRACT: ›Hide‹ The embodied and situated approaches to the philosophical foundation of Artificial Intelligence are broadly influenced by Hubert Dreyfus's philosophy of absorbed coping and by Martin Heidegger's phenomenology of skilled expertise. These approaches point out that the main obstacle to reproducing by mechanical means the flexibility and rich adaptivity of human intelligence is the Frame Problem, that is, the problem of building a computational system that can process information and produce behavior in a manner that is fluidly sensitive to context-dependent relevance. The frame problem occurs because the representations and the heuristics stored by the system are never sufficient to determine which information is relevant in a certain context: hence the frame problem arises when the behavior of the machine is entirely mediated by representations, because the indefinite complexity of a certain context is never exhausted by the representation of that context.

According to Michael Wheeler, the frame problem is double-headed, as it can be either intra- or inter-contextual in character. While the former challenges us to say how a system is able to achieve flexible and fluid action within a context, the latter challenges us to say how that system is able to flexibly and fluidly switch between an open-ended sequence of contexts in a relevance-sensitive manner. This distinction is allowed by Dreyfus's phenomenology of coping, because while local forms of directed coping aim to attune the system's conduct to certain contexts, they all differ from a more fundamental form of background coping, which selects the preconditions that are relevant for switching from one context to another. Dreyfus seems to agree with Wheeler that directed coping can be mediated by minimal representations of a context without this producing a Frame Problem (for example when skillful behavior is troubled and coping occurs in problematic or unfamiliar contexts). But Dreyfus also claims that background coping is entirely non-representational in nature, and that even action-oriented representations don't play any role in context-switching (if they did, then a frame problem would arise).

In my paper I am going to argue that even background coping can be mediated by action-oriented representations, as they serve to prepare an agent to switch appropriately from one context to another. Studies of embodied intelligence have often tended to focus on the essentially responsive aspects of bodily expertise, but skilled performers often execute ritual-like gestures or other fixed action routines as performance-optimizing elements in their pre-performance preparations, especially when daunting or unfamiliar conditions are anticipated. Such ritualized actions summon more favorable contexts for their accomplishment, by uncovering viable landscapes for effective action. While the kinds of embodied skills that have occupied many recent theorists serve to attune behavior to an actual context of activity, whether that context is favorable or not, preparatory embodied routines actively refer to certain potential (and thus non-actual) contexts of a favourable nature that those routines themselves help to bring about, indicating the possibilities of action disclosed by the desired context. This disclosive and transformative event is constitutively mediated by action-oriented representations, as their function consists exactly in referring to whole contexts of action without presupposing their full description; and since this event serves to fluidly switch to more appropriate contexts of action, even background coping is mediated by action-oriented representations, pace Dreyfus.

The human ability to skillfully indicate and reconfigure contexts is intimately intertwined with the widely cited phenomenon of the background, i.e. the implicit and plausibly endless chains of preconditions (bodily, attitudinal, social, cultural) that provide the context-dependent meaning and normative relevance conditions for any specific intelligent action. In this paper I’m going to deepen Dreyfus’ notion of background, with the intention to critically discuss some of its features, while endorsing others: in order to argue that background coping necessarily needs (and can be mediated by) a minimal form of representation, I’m going to point out three essential features of the background that are relevant to understanding the problem of context switching in Artificial Intelligence.

First, the background is inhabited as a unitary set of holistic conditions, but this doesn’t mean that the whole network of background preconditions is always equally involved in every kind of coping, because, for each form of directed coping, some elements or modality of background coping may exclude others. The background is a vast web of significance that we always inhabit from a situated perspective, and different profiles of the background matter in different degrees at different times. Were background knowledge always equally present in all its aspects, there would be no need for the disclosive function of preparatory gestures.

Secondly, directed coping, whether non representational or minimally representational, is always underpinned by background coping, but the background can in turn be modified by ongoing, concrete acts of directed coping. The background provides an ontological platform for our situated experience, but not as an immobile ground. Rather, the background is a relatively stable scaffold that orients our everyday engagements within the world. Directed coping continuously re-founds the background by dynamically modifying the normative preconditions it embodies, and the background in turn provides the normative preconditions for further directed coping, according to the general Heideggerian schema of the hermeneutic circle.

Finally, background coping is not guided by full-blooded representations of a cognitivist kind, because the background can’t be reduced to a body (however vast) of explicitly represented information, beliefs, or stored heuristics. That said, our access to the background is often mediated and articulated by action-oriented representations, sometimes disclosed by preparatory gestures. This means that it is a minimally representationalist approach to intelligence that brings the background and its dynamics into proper view. The problem of relevance indicates that background coping cannot be understood on the cognitivist model, as a rational process of deliberation using full-blooded representations. But neither can background coping be understood, or at least not exhaustively so, as an unreflective and nonrepresentational selection of past contexts.
ABSTRACT: ›Hide‹ The paper addresses the conference theme from the broader perspective of the historical interactions between the Humanities and computational disciplines (or, more generally, the “sciences of the artificial”). These encounters have followed a similar although symmetrically opposite “takeover” paradigm. However, there is an alternative meeting mode, pioneered by the interactions between studio and performance arts and digital technology. A brief discussion of the microsound approach to musical composition and the T-Garden installation shows that these alternative encounters have been characterized by a willingness on both parts to let their basic issues, techniques, and concepts be redefined by the partner disciplines. I argue that this modality could (and perhaps should) be extended to other Humanities disciplines, including philosophy.
ABSTRACT: ›Hide‹ Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence/robotics communities. We will argue that the attempts to allow machines to make ethical decisions or to have rights are misguided. Instead we propose a new science of safety engineering for intelligent artificial agents. In particular we issue a challenge to the scientific community to develop intelligent systems capable of proving that they are in fact safe even under recursive self-improvement.
12:30-13:00
Session A: Harry Halpin, "Becoming Digital: Reconciling Theories of Digital Representation and Embodiment"
Session B: Colin Schmidt and Kamilla Johannsdottir, "Simulating Self in Artifacts"
Session C: Stuart Armstrong, "Thinking inside the box: using and controlling a dangerous Oracle AI"
ABSTRACT: ›Hide‹ One of the defining characteristics of “actually-existing” computational mechanisms, ranging from the World Wide Web to word processors, is that they deal in information that is (or at least seems to be) robustly digital: bits and bytes. Yet there is no clear notion of what 'being' digital consists of, and a working notion of digitality is necessary to understand our computers, if not human intelligence. Over the last twenty years, however, there has been a movement against digital representations as important to AI, with the focus moving instead to the more biologically realistic work on dynamical systems and neural networks. Much of this work, from the theory of autopoiesis of Maturana and Varela to the anti-representationalist work of Rodney Brooks, decries even the existence of representations and information. At present, as exemplified by the work on extended-embedded intelligence by Clark and Wheeler, there seems to be a movement to incorporate the environment into a brave new research agenda for AI. This environment would definitely include computers, the Web, and other systems with rather intuitively representational and information-carrying behavior.

Can representations and information be reconciled with embodiment? On the surface a term like 'representation' seems to be what Brian Cantwell Smith calls “physically spooky,” since a representation can refer to something with which it is not in physical contact. This spookiness is a consequence of a violation of common-sense physics, since representations appear to have a non-physical relationship with things that are far away in time and space. This relationship of 'aboutness' or intentionality is often called 'reference.' While it would be premature to define 'reference,' a few examples will illustrate its usage: someone can think about the Eiffel Tower in Paris without being in Paris, or even having ever set foot in France; a human can imagine what the Eiffel Tower would look like if it were painted blue. Furthermore, a human can dream about the Eiffel Tower, make a plan to visit it, and so on, all while being distant from the Eiffel Tower. Reference also works temporally as well as distally, for one can talk about someone who is no longer living, such as Gustave Eiffel. Despite appearances, reference is not epiphenomenal, for reference has real effects on the behavior of agents. Specifically, one can book a plane ticket to visit the Eiffel Tower after making a plan to visit it. We present the “representational cycle”, based on the work of Cantwell Smith, which formalizes a version of a non-spooky theory of representations by showing how representations “obtain” their semantics over space-time dynamics, which is in essence not incompatible with embodiment.

However, we are not only interested in representations, but in digital representations. As argued by Mueller, without a coherent definition of digitality, it is impossible even in principle to answer questions like whether or not digitality is purely subjective. As noticed by Cantwell Smith, one philosophical essay that comes surprisingly close to defining digitality is Goodman's Languages of Art. Goodman defined a scheme of physically distinguishable marks as “finitely differentiable” when it is possible to determine, for any given mark, whether it is identical to another mark or marks. This can be considered equivalent to how, in categorical perception, despite variation in handwriting, a person perceives hand-written letters as being drawn from a finite alphabet, close to the philosophical notion of types. Digital systems are the opposite of Bateson's famous definition of information: being digital is simply having a difference that does not make a difference. In an analogue system, by contrast, every difference in a mark makes a difference, since between any two types there is another type that subsumes a unique characteristic of the token. In this manner, the prototypical digital system is the discrete set of integers, while the continuum of real numbers is the analogue system par excellence. In order to tie representations to digitality, we can proceed by distinguishing content and form: digitality is a notion close to “form”, while representations are a kind of non-local content determined by local conditions.
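To make Goodman's criterion concrete, the following minimal Python sketch contrasts a finitely differentiable, digital reading of a mark with an analogue reading in which every difference counts; the mark widths, thresholds, and type names are purely hypothetical illustrations, not material from the abstract.

    # Toy illustration (hypothetical widths, thresholds, and type names): a digital
    # reading quantizes marks into a finite alphabet of types, so sub-threshold
    # variation makes no difference; an analogue reading preserves every difference.

    ALPHABET = ["thin", "medium", "thick"]   # finite set of types
    BOUNDARIES = [0.5, 1.5]                  # cut points between types

    def digital_type(width: float) -> str:
        """Finitely differentiable reading: every mark falls under exactly one type."""
        for boundary, label in zip(BOUNDARIES, ALPHABET):
            if width < boundary:
                return label
        return ALPHABET[-1]

    def analogue_reading(width: float) -> float:
        """Analogue reading: every difference in the mark is preserved."""
        return width

    a, b = 0.30, 0.31                        # two slightly different tokens
    print(digital_type(a) == digital_type(b))          # True: the difference makes no difference
    print(analogue_reading(a) == analogue_reading(b))  # False: every difference counts
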
ABSTRACT: ›Hide‹ Simulating Self in Artifacts
ABSTRACT: ›Hide‹ There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed goals or motivation systems. Solving this issue in general has proven to be considerably harder than expected. One suggestion that has often been mooted is to isolate the AI from physical interactions with the outside world, restricting it to merely answering questions. Such an AI is often referred to as an “Oracle AI” (OAI) or AI-in-a-box.

Evaluating the danger posed by such an OAI depends on certain key assumptions on the nature of general intelligence, the nature of human intelligence, and the space of possible minds. However, under certain not unreasonable assumptions, an OAI is barely more secure than an unrestricted AI (and could be considerably less so, were its boxed nature to inspire us to overconfidence). The main avenues open to such an OAI pursuing its goals are through social engineering and bribery – an economically or scientifically adept OAI could offer the world to its human minders. But it is through social engineering that it has the most potential of escaping its restrictions. This is not merely a theoretical fear: informal experiments by Eliezer Yudkowsky have demonstrated that smart humans can often ‘un-box’ themselves purely through typed interactions, within an hour. The first part of the talk will thus be dedicated to understanding how much risk a boxed AI poses.

Though there may be diminishing returns to social intelligence, it is clear that boxing in and of itself is not a major impediment to a potentially dangerous AI. The main attraction of the approach is that there are many supplementary precautions that can be imposed on an OAI that are not available for a general AI. Though none of these “methods of control” are sufficient for constructing a safe OAI, they each add a level of security, and move the field of OAI safety into a more fruitful practical domain: they can be critiqued and improved, holes identified and filled. This avoids either of the extremes of ‘no need for precautions’ or ‘solve everything formally before even designing the AI’.

These methods of control can be broadly grouped into three categories. The first is capability control: reducing the potential for dangerous OAI behaviour by limiting the physical and epistemic tools available to it. Next is motivational control: seeking to ensure the OAI has the correct internal motivational structure to behave as we wish it to. Lastly come checks and balances: methods devoted not to forbidding dangerous behaviour, but to catching it before it is too late.
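As a purely illustrative rendering of how these three families might be layered, consider the Python sketch below; the class, field, and method names are hypothetical placeholders and do not come from the talk.

    # Illustrative toy only; the names and checks below are hypothetical placeholders
    # for the three families of controls, not anything proposed in the talk.
    from dataclasses import dataclass, field

    @dataclass
    class BoxedOracle:
        allowed_topics: set                 # capability control: limit epistemic/physical tools
        utility_weights: dict               # motivational control: declared goal structure
        audit_log: list = field(default_factory=list)  # checks and balances: catch bad behaviour

        def answer(self, question: str, topic: str) -> str:
            # Capability control: refuse questions outside the permitted domain.
            if topic not in self.allowed_topics:
                return "REFUSED: topic outside sandbox"
            # Motivational control (stub): a real system would consult its goal structure here.
            reply = f"[stub answer about {topic}]"
            # Checks and balances: log every exchange for later human review.
            self.audit_log.append((question, reply))
            return reply

    oracle = BoxedOracle(allowed_topics={"protein folding"}, utility_weights={"honesty": 1.0})
    print(oracle.answer("How does this protein fold?", "protein folding"))
    print(oracle.answer("How do I leave the sandbox?", "self-modification"))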

Motivational control is the area with the most potential. Capability control is essential but insufficient, and its issues are relatively easy to understand and address; it is unlikely that future research will result in surprising new breakthroughs here. Checks and balances are good, but we cannot rely solely on tricking or catching out an entity potentially much smarter than ourselves.

In order for motivational control to work, we will first need to establish what exactly we would want the OAI to be motivated to do; this is a highly non-trivial task, as we need to estimate the consequences of a particular motivational structure given to a super-intelligent being, without ourselves being super-intelligent. Then we need to ensure that the motivational structure is indeed well anchored in the OAI. There is a tension between these requirements: it may be easy to check that an OAI is implementing a particular utility function, but much harder to be sure that that utility function indeed leads to what we need. Structures such as trained neural nets sit at the other extreme: if the net follows its training 'correctly', the outcome will most likely be positive, but it is much harder to check that it is indeed doing so. The main portion of the talk will be dedicated to exploring these issues, and to comparing and contrasting the different methods of control.
13:00-13:30
Session A: Malinka Ivanova, "Towards Intelligent Tutoring Systems: From Knowledge Delivering to Emotion Understanding"
Session B: Anthony Morse, "Snap-Shots of Sensorimotor Perception"
Session C: David Anderson, "Machine Intentionality, the Moral Status of Machines, and the Composition Problem"
ABSTRACT: ›Hide‹ Contemporary Intelligent Tutoring Systems (ITSs) are developed either as standalone software or as applications that form part of a whole eLearning environment. In both cases ITSs contain one or more components with different levels of artificial intelligence (AI), incorporating techniques for communication and/or the transfer of knowledge and skills to students [1]. The functionality of traditional ITSs is ensured by four components: a domain model related to the content of a given subject matter, a student model containing data about the student’s performance, a teaching model with pedagogical scenarios adapted to students’ needs and achievements, and a presentation model covering the user-interface elements. Nevertheless, there is a wide variety of ITS solutions, differing in the achieved level of intelligence and in the flexibility of the eLearning environment. Jeremić et al. propose a teaching model divided into a pedagogical module and an expert module. The expert module is used for decision making about generated teaching plans and the adaptive presentation of the teaching material, which are used by the pedagogical module [2]. The main aim is to increase the flexibility of the system. Martens presents a tutor that supports a learner’s cognitive processes, specifically knowledge application and diagnostic reasoning, in the field of medicine [3]. The development of an emotional module consisting of an affective learning companion that should be able to detect learner emotions and respond with appropriate levels of support is discussed in [4]. In this way a learner will stay focused on the task, but boredom and frustration still persist. Boredom might be disrupted through motivation-enhancing techniques [5]. The motivational aspects of instruction in ITSs are explored in [6], where the authors describe several approaches to motivation diagnosis. They agree that an ITS can be effective only if it can detect emotions and thereby empathise with its user, and they suggest several areas for further research: other communication channels, eliciting motivation-diagnosis knowledge, self-report approaches, models of motivation, and individualised motivation diagnosis.
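To make the classical four-component decomposition concrete, here is a minimal Python sketch; the class and method names are illustrative only and are not taken from any of the systems cited above.

    # Minimal sketch of the classical four-component ITS decomposition; all names
    # are illustrative and are not taken from any of the cited systems.

    class DomainModel:
        """Content of the given subject matter."""
        def next_concept(self, mastered):
            curriculum = ["variables", "loops", "functions"]
            return next((c for c in curriculum if c not in mastered), "done")

    class StudentModel:
        """Data about the student's performance."""
        def __init__(self):
            self.mastered = set()
        def record(self, concept, correct):
            if correct:
                self.mastered.add(concept)

    class TeachingModel:
        """Pedagogical scenarios adapted to needs and achievements."""
        def choose_activity(self, concept, student):
            return ("exercise on " if student.mastered else "worked example on ") + concept

    class PresentationModel:
        """User-interface elements."""
        def render(self, activity):
            return f"<screen>{activity}</screen>"

    # One tutoring step wired together:
    domain, student, teacher, ui = DomainModel(), StudentModel(), TeachingModel(), PresentationModel()
    concept = domain.next_concept(student.mastered)
    print(ui.render(teacher.choose_activity(concept, student)))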

Nowadays, various researchers have tried to find an approach to ITSs that is more “human”, and this survey shows that much work in this field remains to be done. Further evidence for this is a recently published study [7] which compares human-human, computer-mediated tutoring with two computer tutoring systems based on the same materials but differing in the type of feedback provided. The investigation shows significant differences between the human tutor and the computer tutors, but also between the different types of computer tutors.

The aim of this contribution is to examine the current level of intelligence achieved in ITSs (a selection of ITSs is shown in Table 1), presenting several views and aspects of their building components. An analysis of the concepts behind them will help clarify which artificial-intelligence techniques and applications could be applied in the future development of ITSs to make them more effective for each learner. The analysis also addresses solutions suitable for implementation in engineering education.
ABSTRACT: ›Hide‹ Sensorimotor theories of perception are highly appealing to A.I. due to their apparent simplicity and power; however, they are not problem-free either. This talk will present a frank appraisal of sensorimotor perception, discussing and highlighting the good, the bad, and the ugly with respect to a potential sensorimotor A.I.

At the heart of sensorimotor perception is the idea that perception is to a large extent based upon predictions of the future sensory consequences of various potential actions. This simple idea has far-reaching and appealing implications for A.I. Perhaps the first and most obvious implication is that nothing special has been said about vision, or auditory processing, or any other modality, and so specialized methods for this or that modality or domain are not required: the same method is used whatever the form of information, activity, or data happens to be. Next, the move from predicting how head movements change visual sensory contact with objects (for example, identifying the profile of a round object) to predicting the consequences of more complex actions would seem, intuitively, to result in the perception of affordances. In fact, on a sensorimotor account our perception of the world and the things in it is very much affordance-based. In contrast to traditional theories of concept acquisition, which require additional machinery or mechanisms to make use of concepts, sensorimotor perceptions tell you exactly how to interact with the world: to perceive a chair is to know how to interact with it. So potentially little if any additional machinery is required to make use of the resulting concepts. Sensorimotor perception, then, is clearly a theory of cognition, as opposed to one of minimal cognition or mere concept formation; but can it be made to work for A.I.?

Ultimately we think so, but in its current form sensorimotor perception suffers from the Frame Problem. Put simply, one must set out to perform a number of simulated actions, the results of which will reveal the profile of interactivity (or affordances) of the currently unknown object in front of you; but which of the infinitely many possible actions one could simulate will actually reveal the object's identity? The frame problem is a biggie; theories that succumb to it rarely survive, but here new and highly suggestive data from neuropsychology may provide a surprising way out of the problem. The surprising aspect is that, to survive, sensorimotor perception may have to embrace precisely the kind of theory that it purports to be in opposition to, i.e. the snapshot hypothesis.

From the outset sensorimotor perception has been portrayed in opposition to the snapshot hypothesis, in which static visual scenes are analysed in detail for visual features (lines, orientations, edges, gradients, shapes, etc.) that reveal the identity of the objects in the scene. And yet we have a schism: while the biology of the visual cortex somewhat supports the snapshot hypothesis, the psychology and phenomenology of our experience clearly fall on the side of sensorimotor perception. New data from neuropsychology, however, suggest that early visual processing primes simple motor plans, which in turn prime higher areas of the visual cortex. The result would seem to support a hybrid sensorimotor/snapshot theory in which low-level features (such as grasp points) are initially identified from visual or other modalities, leading to simple motor plans. These motor plans then serve to prime or focus the extraction of further, more complex features, resulting in more complex simulations, and so on. Following such an iterative method, the frame problem can be avoided and useful approaches for A.I. can be developed.
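One way to picture the iterative method just described is the following Python sketch; the function names and the stubbed features and plans are hypothetical illustrations of the loop, not material from the talk.

    # Illustrative sketch of the iterative prime-and-extract loop described above;
    # all names and stub values are hypothetical.

    def extract_features(scene, focus=None):
        # Coarse features first; finer ones once a motor plan provides focus.
        return {"grasp_points"} if focus is None else {"grasp_points", f"details_near_{focus}"}

    def prime_motor_plans(features):
        # Each feature suggests a small number of candidate actions.
        return [f"reach_toward_{f}" for f in sorted(features)]

    def simulate(plan, scene):
        # Predicted sensory consequences of carrying out the plan (stub).
        return f"predicted_profile_of_{plan}_on_{scene}"

    def perceive(scene, iterations=3):
        focus, percept = None, set()
        for _ in range(iterations):
            features = extract_features(scene, focus)
            plans = prime_motor_plans(features)   # a handful of plans, not the infinite space
            percept.update(simulate(p, scene) for p in plans)
            focus = plans[0]                      # the primed plan focuses the next pass
        return percept

    print(perceive("mug_on_table"))
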
ABSTRACT: ›Hide‹ According to the most popular theories of intentionality, a family of theories we will refer to as “functional intentionality,” a machine can have genuine intentional states so long as it has functionally characterizable mental states that are causally hooked up to the world in the right way. This paper considers a detailed description of a robot that seems to meet the conditions of functional intentionality, but which falls victim to what I call “the composition problem.” One obvious way to escape the problem (arguably, the only way) is if the robot can be shown to be a moral patient – to deserve a particular moral status. If so, it isn’t clear how functional intentionality could remain plausible (something like “phenomenal intentionality” might be required). Finally, while it would have seemed that a reasonable strategy for establishing the moral status of intelligent machines would be to demonstrate that the machine possessed genuine intentionality, the composition argument suggests that the order of precedence is reversed: The machine must first be shown to possess a particular moral status before it is a candidate for having genuine intentionality.
13:30-14:00
Session A: Sam Freed, "Liberating AI from Dogmatic Models"
Session B: Aziz F. Zambak, "The Frame Problem"
Session C: Mark Bishop, Slawomir Nasuto and Bob Coecke, "Quantum picturalism and Searle's Chinese room argument"
ABSTRACT: ›Hide‹ Liberating AI from Dogmatic Models (to be announced)
ABSTRACT: ›Hide‹ In AI, it is difficult to situate environmental data in an appropriate informational context. In order to construct machine intelligence, we have to develop a strategy of reasoning for adapting data and actions to a new situation. The frame problem is the most essential issue that an AI researcher must face. It is a litmus test for the evaluation of theories in AI. In other words, the frame problem is the major criterion for understanding whether a theory in AI is proper or not. In several of his writings, Dreyfus considered the frame problem a major challenge to AI. According to Dreyfus and Dreyfus, computers designed in terms of classical AI theory do not have the skills to comprehend what has changed and what remains the same, because they operate on isolated and pre-ordained data processing. The frame problem shows the necessity of having an agentive approach to thought, cognition, and reasoning. It implies the necessity of an efficient informational system that provides machine intelligence with accessible and available data for use in the world.

What is the frame problem? Simply stated, the frame problem is about how to construct a formal system (e.g., machine intelligence) that deals with complex and changing conditions. The main issue behind the frame problem is to find a proper way to state the relationship between a set of rules and actions.

The frame problem has various definitions. This variety is caused by divergent views with regard to the categorization of the frame problem. In our opinion, it is possible to categorize the frame problem into three main groups: namely, metaphysical, logical, and epistemological.

The Metaphysical Aspect of the Frame Problem is about practical studies conducted in order to find and implement general rules for an everyday experience of the world. These practical studies should include spatio-temporal properties of environmental data. How to update beliefs about the world when an agent comes face to face with a novel (or unknown) situation is part of these practical studies. Cognitive science, especially drawing information from domains of an agent’s cognitive actions, is seen as a part of these practical studies. For example, pattern recognition can be seen as a metaphysical aspect of the frame problem.

The Logical Aspect of the Frame Problem: If you push a box, then you also push all of its contents. This is commonsense reasoning, and some philosophers see the frame problem as a part of commonsense reasoning and logic. The logical aspect of the frame problem is about the axiomatization of an application domain in which some causal laws for an event (or action) should be predetermined. This predetermination includes stating some set of rules. Each set of rules carries potential information about certain statements. But it is important for an agent to find a proper way to create a new set of rules in an unknown (novel) situation. Reasoning is the most crucial issue for the analysis of the logical aspect of the frame problem.
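To make the logical aspect concrete, the following minimal Python sketch hard-codes an effect rule for "push" together with the frame assumption that everything unaffected stays the same; the toy domain and all names are hypothetical and do not come from the abstract, and the difficulty the author points to is precisely that such predetermined rules do not by themselves extend to novel situations.

    # Toy illustration of a predetermined causal rule plus a frame assumption;
    # the domain and names are hypothetical.

    def push(state, box, new_location):
        """Effect rule: pushing a box moves it and, by commonsense, its contents."""
        moved = {box} | state["contents"].get(box, set())
        return {
            "location": {obj: (new_location if obj in moved else loc)   # frame assumption:
                         for obj, loc in state["location"].items()},    # everything else unchanged
            "contents": state["contents"],
        }

    state = {
        "location": {"box": "hall", "book": "hall", "lamp": "desk"},
        "contents": {"box": {"book"}},
    }
    print(push(state, "box", "kitchen")["location"])
    # {'box': 'kitchen', 'book': 'kitchen', 'lamp': 'desk'}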

The Epistemological Aspect of the Frame Problem: In AI, there is a tendency to define the frame problem as an epistemological dilemma. The epistemological aspect of intelligence concerns the representation of the world; and the heuristic aspect of intelligence deals with practical issues such as problem solving.

We consider the frame problem in AI to be a logical problem. Therefore, only a proper logical model can provide machine intelligence with relevant knowledge, action, and planning in a complex environment. In AI, complexity is a logical issue defined in terms of formal and computational items. The solution of the frame problem depends on the manner of reorganizing data-processing in terms of changes in the world. This kind of reorganization is possible only by using a proper logical model. The frame problem is not the problem of a machine intelligence designer but the problem of the machine itself. That is to say, the logical model embodied in machine intelligence is sufficient for describing the elements of a complex situation and finding the relevant action and plan. If the frame problem remains a problem of designers, it will never be solved. The way humans perform reasoning about changes and complexities in the environment cannot be modeled by AI. Machine intelligence requires a transformational logical model peculiar to its hierarchical organization. In other words, we see the trans-logical model, which will be explained in the presentation, as a proper methodological ground for developing a reasoning model in an agentive system.

We attribute a constructive and regulative role to logic in AI in order to find a proper (ideal) way of reasoning for machine agency. For the realization of such roles, a logical model that can operate in complex situations and overcome the frame problem should be developed. In the presentation, some basic principles for a logical model in AI will be proposed. The main idea behind the trans-logic system is that, in AI, reasoning is based on using data (S-units and M-sets) and operating successive processes until the final information is achieved (realized). These processes are of two kinds. The first kind is S-unit replacement, where any S-unit and/or M-set in the data processing is interchanged with one or more S-units and/or M-sets. The second kind is context-dependency, in which context-free and context-dependent rules are described. A context-free rule indicates that S-units can always be interchanged with one another. A context-dependent rule implies that the replacement of M-sets is possible only in a pre-ordained context. In our trans-logic system, S-units are microstructures that are autonomous and transitional, able to pass from one condition to another and to be integrated into larger M-sets, with the partial or total loss of their former structuring in favor of a new reasoning function. S-units are micro-models with a transformational structure that can be integrated into larger programming units and thereby acquire functional significations corresponding to their positions in these larger programming units, such as M-sets. The configuration between S-units and M-sets is done by various logical models. Therefore, a trans-logic system includes concomitant logics which have various functions for the reasoning processes in machine intelligence.
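One possible reading of the S-unit/M-set machinery can be rendered as the toy Python sketch below: S-units as interchangeable micro-units, M-sets as larger context-bound units, with a context-free rule for S-unit replacement and a context-dependent rule for M-set replacement. All names and rules are illustrative and are not the author's implementation.

    # Toy rendering of one reading of the abstract; all names are illustrative.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SUnit:
        name: str            # autonomous, transitional micro-structure

    @dataclass
    class MSet:
        context: str         # the context this larger unit belongs to
        units: list          # S-units integrated into the set

    def replace_s_unit(mset, old, new):
        """Context-free rule: S-units can always be interchanged."""
        return MSet(mset.context, [new if u == old else u for u in mset.units])

    def replace_m_set(current, candidate):
        """Context-dependent rule: an M-set is replaced only within a pre-ordained context."""
        return candidate if candidate.context == current.context else current

    kitchen = MSet("kitchen", [SUnit("grasp"), SUnit("pour")])
    kitchen = replace_s_unit(kitchen, SUnit("pour"), SUnit("stir"))       # always allowed
    kitchen = replace_m_set(kitchen, MSet("workshop", [SUnit("drill")]))  # rejected: wrong context
    print(kitchen)
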
ABSTRACT: ›Hide‹ Perhaps the most famous critic of computational theories of mind is John Searle. His best-known work on machine understanding, first presented in the 1980 paper ‘Minds, Brains & Programs’, has become known as the Chinese Room Argument (CRA). The central claims of the CRA are that computations alone cannot, in principle, give rise to cognitive states, and that therefore computational theories of mind cannot fully explain human cognition. More formally, Searle (1994) stated that the Chinese Room Argument was an attempt to prove the truth of the premise:

1. Syntax is not sufficient for semantics

… which, together with the following two axioms:

2. Programs are formal (syntactical)

3. [Human] Minds have semantics (mental content)

… led him to conclude that ‘programs are not minds’ and hence that computationalism - the idea that the essence of thinking lies in computational processes and that such processes thereby underlie and explain conscious thinking - is false.

In the CRA Searle emphasizes the distinction between syntax and semantics to argue that while computers can follow purely formal rules, they cannot be said to know the ‘meaning’ of the symbols they are manipulating, and hence cannot be credited with ‘understanding’ the results of the execution of programs those symbols compose. In short, Searle claims that Artificial Intelligence (AI) programs may simulate human intelligent behaviour, but not fully duplicate it.

Although the last thirty years have seen tremendous controversy over the success of the CRA, a great deal of consensus has emerged over its impact: Larry Hauser called it “perhaps the most influential and widely-cited argument against the claims of Artificial Intelligence”; Stevan Harnad, editor of Behavioural and Brain Sciences, asserted that “it [the CRA] has already reached the status of a minor classic”; Anatol Rapaport claims it “rivals the Turing test as a touchstone of philosophical inquiries into the foundations of AI”; and in 2002 Mark Bishop co-edited a volume of essays reflecting new analyses of the argument (Preston & Bishop, OUP). Yet even in 2002 it was clear that many of those within cognitive science remained at the very least suspicious of Searle's logic, if not downright hostile to the argument.

In an attempt to shed an experimental light on a hitherto obfuscated philosophical subject, in 1999 Hare and Wang reported on an empirical ‘replication’ of Searle’s philosophical thought experiment. Their work sought to experimentally demonstrate the power of Searle's intuition; however, in effectively viewing language as a ‘simple code’, their implementation was perhaps too linguistically impoverished to draw any strong conclusions about the broader success or otherwise of the CRA.

Nevertheless, despite its detractors, in the thirty years since its first publication the CRA has undoubtedly helped presage a widespread dissatisfaction with disembodied computationalism (as exemplified in classical GOFAI approaches to AI) and a move towards the new orthodoxy of an embodied, embedded cognitive science, and this influence has led many in the field to have some sympathy with Searle's conclusions, if not with the formal exposition of the Chinese Room Argument itself.

On the other hand, in the apparently unrelated context of theoretical physics, one scientist at the University of Oxford wondered why it took 50 years after the birth of the Quantum mechanical formalism to discover, for example, that unknown Quantum states cannot be cloned, or why it took 60 years to discover the [easily derivable] physical phenomenon of ‘Quantum teleportation’. That scientist - Bob Coecke - suggested that the underlying reason is that the standard Quantum mechanical formalism neither supports our intuition nor elucidates the key concepts that govern the behaviour of the entities that are subject to the laws of Quantum physics. Its arrays of complex numbers are kin to the arrays of 0s and 1s of the early days of computer programming practice: the Quantum mechanical formalism is too ‘low-level’; hence Coecke recently suggested a symbolic, diagrammatic ‘high-level’ alternative to the Hilbert space formalism [1].

The diagrammatic language Coecke developed allows for intuitive processing of representations of interacting Quantum systems, and trivialises many otherwise complex computations. As a process it supports ‘automation’ and enables a computer to ‘reason’ about interacting Quantum systems, prove theorems, and design protocols.

The underlying mathematical foundation of this high-level diagrammatic formalism relies on so-called monoidal categories, a product of a fairly recent development in mathematics. Effectively Coecke's new symbolic, diagrammatic scheme for Quantum physics defines a new symbolic language for describing and processing Quantum physics systems.

In this paper we will suggest that, although mastery of the iconographic language of Quantum Picturalism may be sufficient to establish interesting new results in Quantum physics, a user of its ‘rule-book’ who has no grounding in the principles of Quantum physics is as ignorant of any new statements of Quantum physics that they produce as Searle is of Chinese; in other words, we suggest that Coecke's formalism offers an alternative route to support Searle's 1980 core intuition that ‘syntax is not sufficient for semantics’ [2]. Perhaps this is unsurprising, as the monoidal category framework provides a natural foundation not only for physical theories, but also for proof theory, logic, programming languages, biology, or even cooking...

13:30-14:30 Lunch

14:30-15:15 Invited Talks

Session A (New Building, Conference Room), Chair: Abramson
Session B (Bissell Library, 2nd floor), Chair: Bishop
14:30-15:15
Session A: Nick Bostrom, "Superintelligence: The Control Problem"
Session B: Matthias Scheutz, "Does it have a mind? The urgent need for architecture-based concepts and analyses in AI"
ABSTRACT: ›Hide‹ Superintelligence: The Control Problem (to be announced)
ABSTRACT: ›Hide‹ Up-and-coming AI and robotic systems will increasingly challenge typical human assumptions about the possible mentality of artifacts. Questions about whether machines can have thoughts, emotions, feelings, attitudes, and the like will inevitably arise in the context of sustained interactions of autonomous artificial agents with humans. And trailing questions about agency, intentionality and ultimately moral responsibility of artifacts will probe our requisite ordinary human notions. In this talk, I will argue for the need to define mental and other concepts related to agents in terms of capacities of agent architectures in an effort to provide testable criteria for whether an artificial system can have and actually does instantiate particular mental states. This will become a critical basis for future discussions about the societal and moral status of advanced AI and robotic systems.

15:30-16:30 Keynote James H. Moor, "Robots and Real World Ethics"
[Location: ACT New Building Amphitheater]

ABSTRACT: ›Hide‹ Ethics is sometimes viewed as too incoherent, imprecise, impractical, unjustifiable, irrelevant, or impossible to implement in machines. But with the likelihood of computers, particularly autonomous robots, having major impacts on our lives in the future, ethical considerations must be taken into account when building them. This talk will explore how that can be done.

16:30 Bus transfer from Anatolia College
Bus 1: "Airport"
Bus 2: "Levkos Pyrgos" (White Tower) on Leoforos Nikis (seafront) and then "Plateia Eleftherias", Port entrance, corner Ionos Dragoumi/Leoforos Nikis (seafront)

[The schedule emphasizes ample space for informal interaction between participants. We currently foresee 60 minutes for keynotes, 45 minutes for invited talks, 30 minutes for section talks - including discussion.]


Wednesday, 05.10.2011

9:15 Bus transfer to Anatolia College (one bus)
Stop 1: "Plateia Eleftherias", Port entrance, corner Ionos Dragoumi/Leoforos Nikis (seafront)
Stop 2: "Levkos Pyrgos" (White Tower) on Leoforos Nikis (seafront)

Workshop "PhiloWeb 2011"

The Second International Symposium on the Web and Philosophy, 10:00-17:00 (Bissell Library, Ground Floor, L1).

17:00 Bus transfer from Anatolia College
Stop 1: "Levkos Pyrgos" (White Tower) on Leoforos Nikis (seafront)
Stop 2: "Plateia Eleftherias", Port entrance, corner Ionos Dragoumi/Leoforos Nikis (seafront)