Abstract: The Internet of Things is more than a new set of technologies: it provides a new way of understanding how things in the world shape our understanding of being in the world. This paper uses (post)phenomenology as a framework to examine the effects of connected devices as means of the co-construction of self and world. It looks beyond the technological implications of the Internet of Things to the social, cultural, and ontological potential it holds.
The gaming industry is a multi-billion-dollar business, constantly on the hunt for innovations. Lately, motion capture techniques have been used to create realistic and persuasive animations. Immersive virtual environments are one of the technologies being developed to support motion capture actors' work.
In this paper we investigate the ethical implications of introducing immersive virtual environments within motion capture. First, we provide an overview of research in computer game ethics, virtual reality, and acting, followed by a discussion intended to help find solutions toward an ethical consensus in the field of motion capture acting.
Abstract: We discuss Gandy machines. We are concerned with the empirical question of whether, and to what extent, these machines, construed as mathematical structures, actually represent local physical situations. In order to do this it is necessary to give a precise definition of the notion of realization of a Turing-computable algorithm in a physical situation, as well as a precise analysis of the property of locality. We show that there is an inaccuracy in Gandy's analysis, since it permits transmission speeds greater than that of light, and we discuss Gandy machines within the context of quantum physics. Finally, we find it worthwhile to ask whether a quantum machine is really a Gandy machine.
Abstract: The intuition that “the mind is software” is widespread in both philosophy and cognitive science. Danks (2008) recently developed an account, grounded in this idea, of intertheoretic relations for cognitive theories. Specifically, the proposal was that cognitive models are connected through a relation of compilation or implementation, where this relation is the one found in computer science. In this paper, I argue that this idea faces three substantive challenges—two grounded in practice and one in theory—to the widespread use of this relation. Although some cognitive models might be connected through compilation or implementation, this relation fails to capture critical aspects of the complex relations that can obtain between non-competing cognitive theories.
Abstract: Multiple governments and agencies reportedly engage in large-scale electronic data collection and surveillance. These programs have been heavily criticized, largely on principled grounds (e.g., that they violate privacy rights). This paper instead argues against such practices on purely pragmatic grounds. I present a model of the relevant costs and benefits and show that, for many parameter values that approximate real-world situations, targeted data collection and surveillance is pragmatically or rationally preferable to large-scale programs. Although massive data collection can identify individuals who would otherwise have been missed, it also typically produces many “false positives” that can impose significant costs. As a result, these programs are frequently pragmatically suboptimal, regardless of one’s views about their legality or morality.
Abstract: 3D printing is a process of producing solid objects of various shapes (e.g., spare plastic parts for cars) from a digital model by adding successive layers of material. More recently, this technology has been used for producing living tissues and organs. It provides another avenue to analyze the increasingly informational nature of physical objects and the ethical challenges it brings. It uses both specific information provided by the “digital model”, and the instructional information of its printing program. While bioprinting holds promise to alleviate shortages of certain biological tissues, in this paper we begin to address ethical concerns that arise from the possible avenues of exploiting this information and questions about ownership of and accessibility to this information.
Abstract: The socio-technical system of a robot, including the software, matters. Context is central to analyzing the ethical implications for the public, and the ethical responsibilities of the developers of that software. In this work we explore the impact that the software license has for software that controls the behavior of robots. We focus on the nature of software that most directly controls the most ethically significant aspects of the robot, and develop an argument that software relevant to these aspects ought to be open source software.
Abstract: Facial masking in early-stage Parkinson's patients results in a stigmatization of the patient-caregiver relationship. We are investigating the role of a robot co-mediator that can potentially reflect the internal affective state of the patient to the caregiver using nonverbal communication. To do this for the triadic human-robot-human relationship, the robot may need to maintain a theory of mind model of each user in order to detect lack of congruence in the individuals' belief states regarding the emotional state of the caregiver. We explore several alternatives, ranging from the null hypothesis (Theory of Mind is not needed at all), to interdependence and game-theoretic approaches, to direct representations of the affect of each individual.
Abstract: This paper explores the possibility that online education, in particular massive open online courses (MOOCs), can open up new forums for what Paulo Freire suggests is the true purpose of authentic education: conscientization, the ability to recognize and throw off the false narratives of dominant social groups. MOOCs can move along (at least) two trajectories. On one, a centralized hub controls the dissemination of information, leading to concentrations of power across a broad ecological landscape, an almost Foucauldian nightmare. But MOOCs are also capable of creating non-hierarchical, non-linear educational forums where communities of learners are able to share and compare praxis, leading to opportunities for conscientization that are rarely found in traditional educational contexts.
Abstract: In this paper I draw attention to certain similarities between the notion of autopoiesis (Maturana & Varela 1980) and the mechanistic account of computation (Piccinini 2007), with the primary aim of clarifying aspects of the latter. In particular, the role played by input/output components in a computing mechanism closely resembles the relationship between an autopoietic system and its environment, leading to a convergence in the non-representational status of both kinds of system. There is also a potential application to the study of cognition.
Abstract: Experiments in computing share many characteristics with the traditional experimental method, but they also present significant differences from a practical perspective, due to their aim of producing artefacts.
The central role played by the human actors (e.g., programmers, project teams, software houses) involved in the artefact creation process calls for an extension of the relevant conceptual framework from a socio-technical perspective.
By analysing the most significant experiments in the subfield of Software Engineering, we aim to show how the notion of control, one of the pillars of the experimental method, needs to be revised: it should be understood in an a posteriori form, in opposition to the a priori form exerted in traditional experimental contexts.
Abstract: The works of two authors, Luciano Floridi (1964-) and Vilém Flusser (1920-1991), are compared to show the significant similarity of their fields of interest across distinctly different periods. While the two come from very different backgrounds (Philosophy of Information and Philosophy of Communication, respectively) and have very different writing styles (academic and essayistic), they have come up with terms that indicate the same phenomena. Floridi talks about inforgs living in hyper-history after the information revolution, and calls for a new e-nvironmental information ethics, with the use of Game Theory models. Flusser claims that post-history is the result of the information revolution, and that the playful beings of the future should focus on information as the new ecological value.
Abstract: In a recent paper by Knobe & Szabo, it is suggested that asymmetries in human moral judgment may be the result of how the mind entertains the different ways things could have gone when we encounter a morally-charged situation. Their thesis roughly states that our moral judgments are deeply connected to our modal psychology. In this paper I argue for an approach to building moral machines grounded in computational cognitive architecture (CCA), specifically starting with aspects of CCA involved in modal representation and reasoning, giving a rough sketch of how an appropriately rich knowledge representation and reasoning framework is able to capture the dynamics of human moral judgment in one of Knobe & Szabo’s examples.
Abstract: There are different ways of being an eliminativist in the philosophy of mind and the philosophy of computational cognitive modeling. There are also different ways of conceiving of the role that linguistic structures or language-like structures can play in computational cognitive modeling. This paper will distinguish between folk psychological eliminativism and intentional/representational eliminativism; it will also differentiate these from hypotheses about the role that language may or may not play in cognition. The purpose of all this will be to disentangle issues of eliminativism and languages of thought to arrive at greater clarity on the options available for computational cognitive modeling.
Abstract: We introduce an interdisciplinary research project that bridges research in computer science and philosophy in the field of activity recognition. Specifically investigated here is the technology of wearable activity recognition, including its usage in everyday contexts and its design practice. Our core idea is that using such technology has two dimensions, an individual one and a social one, and that it inadvertently affects ways of acting. We assume that activity recognition provides a new perspective on human actions; this perspective is mediated by the recognition process, which includes the chosen recognition model and algorithms, and the visualisation of the results. Concerning the type of application, our focus will be on self-reflection or self-management, for instance in health behaviour change or in collecting activity data for self-knowledge.
Abstract: The goal of this work is to sketch a general strategy for a theory of aesthetics that is grounded in information-theoretic concepts, and to sound a call to action to develop a more fully formalized version of such an aesthetics. Such a theory is seen as necessary in light of the ethical theory of Floridi (2013), which calls for human beings to engage in poietic management of the infosphere. The starting point of this new aesthetics is a histogram with Shannon Entropy on the x axis and Kolmogorov Complexity on the y axis. Content, specified at an exact Level of Abstraction, can be mapped onto such a histogram to create a structure for analyzing a given range of content.
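The proposed mapping can be sketched computationally. The following is a minimal illustration, not the author's formalism: Shannon entropy is computed directly from the byte distribution of a piece of content, and Kolmogorov complexity, which is uncomputable, is approximated by compressed length, a standard proxy. All function names are illustrative.

```python
import math
import zlib
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Empirical Shannon entropy (bits per byte) of the content's byte distribution."""
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def complexity_proxy(data: bytes) -> int:
    """Compressed length as a rough, computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def map_content(data: bytes) -> tuple:
    """Place a piece of content on the entropy/complexity plane."""
    return (shannon_entropy(data), complexity_proxy(data))

# A highly repetitive text sits low on both axes; varied text sits higher.
print(map_content(b"a" * 1000))
print(map_content(bytes(range(256)) * 4))
```

Real content at a given Level of Abstraction would first have to be encoded as bytes under that abstraction; the choice of encoding does substantive work and is left open here.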
Abstract: Computer simulations constitute a significant scientific tool for promoting scientific understanding of natural phenomena and dynamic processes. Significant leaps in computational power and software engineering methodologies now allow the design and development of large-scale biological models which, when combined with advanced graphics tools, may produce realistic biological scenarios that reveal new scientific explanations and knowledge about real-life phenomena. A state-of-the-art simulation system termed Reactive Animation (RA) will serve as a case study to examine the contemporary philosophical debate on the scientific value of simulations, as we demonstrate its ability to form a scientific explanation of natural phenomena and to generate new emergent behaviors, making possible a prediction or hypothesis about the equivalent real-life phenomena.
Abstract: Resilience is becoming an important alternative response to the provision of services in the state sector. In Information Systems development, resilience has often been treated as a non-functional requirement, such as scalability, and little or no work has aimed at building resilience in end-users through systems development. In this paper we introduce a refinement of the value sensitive action-reflection model used in co-design, first introduced by Yoo et al., that recognizes the tension between values and resilience. We report on our activities of using this approach in a project aimed at developing mobile apps for promoting better engagement between young people and their case workers in the UK youth justice system. We examine the ambiguity created when designer and stakeholder prompts change their role and purpose during the co-design process, and discuss the impact of this on resilience building for the end-user. From a methodological perspective, our study suggests that value-based co-design methodology calls for a ground-up revision of traditional design philosophies: their syntactic-semantic-experience distinction is too limited to account for co-design interactions based on values, leading to implications for Information Systems.
Abstract: The aim of this paper is to address a particular variation of the Many Worlds Interpretation of quantum computation (MWI), showing that one of its main claims, namely that quantum computation relies on quantum parallelism, cannot be supported by the Quantum Parallelism Thesis (QPT) and the Physical Information Thesis (PIT) at the same time. I will suggest that, as long as this variation states both PIT and QPT, it cannot furnish a physical explanation of the latter unless it already assumes the truth of MWI. This specific case suggests the more general conclusion that the MWI of quantum computation and other accounts instill very different concepts of information.
Abstract: Computational modeling plays a special role in contemporary cognitive science. Marr's methodology, now dominant, has turned out to be fruitful, even if his three-level account is not without problems. My goal is to offer a descriptive account which is close in spirit to recent developments in the theory of mechanistic explanation. The claim that computational explanation is best understood as mechanistic is gaining popularity. The mechanistic account of computational explanation preserves the insights of Marr but is more flexible when applied to complex hierarchical systems. It may help integrate various different models in a single explanation. By subsuming computational explanation under causal explanation, the mechanistic account is naturally complemented by the methodology of causal explanation.
No theory of mental representation is presupposed in the present account of computation; representation is one of the most contentious issues in contemporary cognitive science. Assuming one particular theory of representation as implied by computation would make other accounts immediately non-computational, which is absurd. Another reason is that mechanistic accounts of computation do not need to presuppose representation, though they do not exclude the representational character of some of the information being processed. Only the notion of information (in the information-theoretic sense, not in the semantic sense, which is controversial) is implied by the notion of computation (or information-processing).
Abstract: This paper proposes a method of examining large informational bodies through the eyes of an information consumer. There is a well-studied dichotomy between the random and the predictable, but autonomous information consumers tend not to enjoy information targeted to either extreme. Instead, we are drawn to information satisfying some much more complicated criterion. The analysis proposed in this paper exploits the notion of scale to gain perspective on the self-reflective property of an informational consumer. By defining a quantity called novelty, we can capture many of the properties of the commonsense notion of the word. The paper applies this analysis to narrative, in an attempt to demonstrate how mathematics can reveal aspects of the nature of creativity and intellectual motivation.
Abstract: In the extant literature, learning is most often treated as a psychologically rich process. This, in turn, has a number of negative implications for clustering in machine learning, since psychologically rich processes are in principle harder to model. In this paper, I start by examining the minimum resources required for learning (as well as for classification and categorization) in the human mind. I explain that these processes are greatly facilitated by top-down effects in perception, and argue that modeling the processes responsible for these effects would render clustering simpler than it is often construed to be, as well as more effective. Crucially, the required resources in question are hardwired low-level resources and are thus easier to model. These low-level processes play a crucial role in what I call the process of 'internalizing supervision'. In 'internalized supervision', stored information influences currently represented information, allowing an artificial system to perform clustering without being externally supervised. Internalized supervision builds upon a notion of similarity that not only exploits the learner's hardwired pattern recognition abilities, but also allows perception to deploy 'memory' or stored information without being heavily cognitively mediated.
Abstract: Several formal definitions of computability, which are thought to correspond satisfactorily to the intuitive concept of effective computation, were proposed in the 1930s. Although such definitions have proven to be equivalent, two different approaches to computability may be distinguished with respect to the relationship between human computers and computable functions. While the first approach is usually regarded as the standard approach to computability, in that it does not take into consideration the concept of a human computer in the process of computing functions, the second approach -- which will be called the epistemic approach -- focuses on the concept of a human computer for characterizing computability. In addition to presenting these approaches, I shall claim that they have deep conceptual implications for the set of computable functions, depending on whether logical or physical computability is considered. From a logical point of view, the standard approach is required for protecting the Church-Turing thesis from epistemic arguments based on the description of a recursive function which is not computable -- such as the one described in (Bowie, 1973). From a physical point of view, however, the epistemic approach -- if strengthened by new epistemic constraints such as those described in (Piccinini, 2011) -- may preserve the physical Church-Turing thesis from counterexamples such as random processes.
It is argued that computing science should be studied as two sciences, separate from the study of computers as tools and separate from the study of building computers.
This change to computing as science will allow methodological comparison and experimental testing of computational aspects of reality that are not possible if computing remains an engineering discipline. The two areas are the study of data as dataology, following Peter Naur's ideas, and computing as a subarea of physics that allows studying aspects of calculation, following Richard Feynman's ideas and the earlier ideas of Max Planck and Albert Einstein. The study of computing then becomes similar to Greek-style science as opposed to current Roman-style engineering.
Abstract: This paper argues that natural selection—indeed selectional processes generally—can successfully be analyzed in terms of mathematical or computational procedures, thus supporting the mathematical or informational turn in biology. Construing selectional processes in mathematical terms demands that they be thought of as governed by a logic, rather than as controlled by causal principles. (Indeed they must be so construed if the evolutionary process is to be modeled in terms of a code passed from one generation to the next. If biology is to take the informational turn, then, natural selection must be disentangled from its physical manifestations.) The mathematical construal of a selectional process is superior to its construal in terms of (causal) mechanisms in at least one regard: it admits clearer treatment of the question of whether (natural) selection is decidable in the computational sense. The present paper suggests there are quite strong reasons for maintaining that natural selection is in fact not decidable, particularly if we follow Valiant's (2009) analysis of natural selection as a search procedure over a character space. Natural selection should instead be construed as governed by the logic of competition, premised on the deeper logic of scarcity. This paper makes a beginning at articulating that logic.
The semantics of the Turing Test offer an enduring quandary: between Searle's (1980) influential argument that the kind of test envisioned by Turing (1950) can never detect semantic processing, and responses in the manner of Harnad (1991) or Schweitzer (2012) that call for increasingly stringent requirements to address that issue, Turing's test seems either very weak or infeasible to implement.
Balancing these two concerns, I offer a middle-ground approach for making a strong Turing Test with relatively few demands, trading on Turing's use of humans as a measuring instrument and drawing on the methods of experimental psycholinguistics to test for the human-AI behavioural parity that is the signature of the Turing Test, while reasonably satisfying concerns about semantic lineage.
Abstract: In recent decades, software development methodologies have gained importance in software engineering, while methodology, an old and central branch of philosophy of science, has matured enough to share its experience. Software engineering, as a particular discipline, and philosophy of science, as a general field, can cooperate to do their best in methodology studies. We try here to show some major differences and similarities between methodology in philosophy of science and in software engineering, in order to have a better prospect for future studies. As an example, in meta-methodology, we use some philosophical insights gained by "naturalistic" approaches to make a concrete instance of this mutual relationship.
Following McGinn*, assume physicalism and let P be the property of the brain causally responsible for phenomenal consciousness. Reflecting on P, McGinn concludes that we are driven to mysterianism, or the thesis that we cannot solve the hard problem of phenomenal consciousness because we suffer cognitive closure with respect to P. That is, our conceptual resources are limited in such a way that we are necessarily blind to P even though P is the very property making possible our own phenomenal consciousness. Being essentially blind to P, we cannot provide a theory of the physical basis of phenomenal states. Absent a theory of the physical basis of phenomenal states, the natural explanation solving the hard problem of phenomenal consciousness--in spite of its existence--is necessarily inaccessible.
Our continuing failure thus far to provide such an explanation at least raises the specter that we are cognitively closed to P. That the failures have been abjectly dismal (and the problem hard in such a way that we cannot even envision how conceptual or empirical investigation might solve it) solidifies the specter of cognitive closure enough to make it worrisome for optimists. McGinn, however, goes further than observing our puzzling lack of progress and gives an argument for cognitive closure.
Briefly, the facts of phenomenal consciousness are available by introspection alone, while the facts of brain states are available by sensory perception alone. That is, we cannot introspectively grasp brain states any more than we can perceive phenomenal states by our senses. Thus P escapes us working from the top down since no amount of introspection can reveal the brain states constituting P, yet no property working from the bottom up can be identified as a candidate P by brain scans since every mere brain property is spatially located and extended in sensory perception. The spatial properties attending sensory perception of brain states, however, make them not merely ill-suited to explaining phenomenal consciousness: The spatial nature of sense perception is incompatible with explaining consciousness. Thus, to paraphrase McGinn, phenomenal consciousness is fully noumenal with respect to brain studies. New and improved brain scans offer no hope of breaching the gap between the noumenal explanandum, phenomenal consciousness, and the explanans, P.
We are cognitively closed to P inasmuch as P is neither introspectively available nor identifiable in sensory perception, yet we are cognitively confined to introspection and sensory perception. For us, the natural explanation of phenomenal consciousness occupies an epistemic occlusion. Because we lack the right sort of conceptual resources, phenomenal consciousness is essentially mysterious to creatures like us. Our inclination to invoke supernatural explanations (souls) or at least to lavishly enrich natural resources (epiphenomenalism or pan-psychism) is understandable, if unjustifiable.
McGinn has been rightly criticized** for asserting that we can fully understand a problem while wholly lacking the capacity to grasp solutions to it, since understanding a problem seems to require at a minimum a conception of possible solutions. Nevertheless, in this paper I argue that cognitive closure with respect to the neural substrate of phenomenal consciousness is a necessary outcome of the complexity constraining the cognition of any creature whatsoever. That is, while computability theory shows that there are at most countably many computable functions despite there being uncountably many functions--the halting problem being a case in point--complexity theory demonstrates further limitations on computation that are too often ignored in developing computational theories of mind. For us, the brain is necessarily a black box (unavailable to introspection in just the way hands, feet, and eyes are not) as a result of the complexity problem posed by being conscious of the neural basis of conscious states. Introspection does not reveal P because it could not be consciously available, and it cannot be consciously available because being so would recursively exhaust any and all computational resources. The dubious possibility of hypercomputational beings notwithstanding, the crux of my argument is showing that temporal complexity constraints on computation epistemically occlude the solution of the problem of phenomenal consciousness from introspection. Complexity theory precludes top-down solutions, in short.
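The computability-theoretic limitation invoked here can be made concrete. The following sketch is my illustration, not the paper's argument: it shows the diagonal construction behind the halting problem, in which any purported total halting decider can be turned into a program it misjudges. The `decider` below is a hypothetical stand-in.

```python
def diagonalize(halts):
    """Given a claimed total halting decider halts(f) for zero-argument
    functions, build a function g that does the opposite of whatever
    halts predicts about g itself."""
    def g():
        if halts(g):
            while True:       # halts() said g halts, so g loops forever
                pass
        return "halted"       # halts() said g loops, so g halts at once
    return g

# A toy decider that claims every program loops forever:
decider = lambda f: False
g = diagonalize(decider)

# g consults the decider about itself, is told that it loops, and promptly
# halts, refuting the decider. A decider answering True would be refuted
# symmetrically (g would then loop). No total decider escapes this construction.
print(g())  # prints "halted"
```

The analogous point in the paper is that complexity-theoretic constraints, not just this undecidability result, bound what any cognitive system can compute about its own processes.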
Yet, contra McGinn and his hand-waving towards the spatial character of sensory perception***, I argue that these same complexity constraints point the way towards a bottom-up solution. That is, far from being an argument against them, the occlusion rendered by complexity constraints on cognition ironically serves to confirm computational theories of mind: What we seek in our brain studies are just those neurological processes that trigger on or reflect other neurological processes but are not themselves triggering or reflective events.
*McGinn, C. 1989, "Can We Solve the Mind-Body Problem?", in Mind 98: 349-66.
**Kriegel, U. 2003, "The New Mysterianism and the Thesis of Cognitive Closure" in Acta Analytica, 18: 177-91.
***McGinn, C. 1995, "Consciousness and Space", in Journal of Consciousness Studies 2: 220-30.
Abstract: This paper applies the algebraic Theory of Institutions to the study of the structure of semantic theories over modular computational systems. A semantic theory of a program's module is first provided in terms of the set of Σ-models which are mapped from a category Th of Σ-theories and generate a hierarchy of structures from an abstract model to a concrete model of data. The collection of abstract models representing different modules of a program is formalised as the category of institutions INS, where theory morphisms express refinements, integrations, and compositions between pairs of modules. Finally, it is required that a morphism in INS occurs at any level iff the same morphism occurs at the lower level along the Th hierarchy.
Abstract: A fundamental question concerns the criteria under which a physical system can be said to implement an abstract computational procedure. I advocate a Simple Mapping Account (SMA), under which computation is not intrinsic to physical systems but is rather an observer-dependent interpretation. The SMA has come into conflict with the Computational Theory of Mind (CTM); I argue that the underlying issue concerns the Computational Sufficiency Thesis (CST), and that the CST should be rejected. I propose a more scientifically plausible version of the CTM wherein the interpretation of the brain as a computational device supplies the high-level organizational key for predicting both future brain states, viewed as implementations of abstract computational states, and output behaviour, viewed in cognitive terms.
Abstract: There is a plethora of computer ethics textbooks currently on the market, many of which have been authored by the IACAP community. The currently existing textbooks differ quite substantially in style, approach and selection of topics, but they all have one thing in common: they are written for a presumed non-savvy audience, which requires more or less elementary technological details to be explained. This is laudable when it comes to educating non-savvy people about the ethical challenges brought about by ICT, but it quickly generates annoyance among the tech-savvy audience. Indeed, tech-savvy students often regard explanations of what is to them obvious as insulting, and as signifying an author who is not in tune with their actual concerns, not understanding that such explanations are necessary for those who study computer ethics without being knowledgeable about ICT. This is unfortunate, because it is of utmost importance to teach computer ethics efficiently to the computer scientists and engineers of tomorrow: they are the ones who can actually predict and prevent ethically problematic scenarios from happening. This group of students deserves a book that is tailored to their needs and background, one that takes it for granted that they know much of the technological background and can therefore focus exclusively on ethical, political and societal issues. There is a need for a computer ethics textbook for digital natives. With this presentation, we seek to draw on the expertise of both textbook authors and educators in the audience.
Abstract: In talk about models of cognition, the very mention of "computationalism" often incites reactions against the Turing machine model of the brain and the perceived determinism of the computational model. Neither of those two objections affects models based on natural computation or computing nature, where the model of computation is broader than deterministic symbol manipulation. Computing nature consists of physical structures that form levels of organization, on which computation processes differ, from the quantum level up. It has been argued that on the lower levels of organization finite automata or Turing machines might be adequate, while on the level of the whole brain non-Turing computation is necessary, according to Andre Ehresmann (Ehresmann, 2012) and Subrata Ghosh et al. (Ghosh et al., 2014).
Abstract: To characterize the representational (information, cognitive, cultural, communication) technologies of the Internet age, I suggest that Aristotle’s dualistic ontological system (which distinguishes between actual and potential being) be complemented with a third form of being: virtuality. Virtuality is reality with a measure, a reality which has no absolute character but a relative nature. This situation can remind us of the emergence of probability in the 17th century: then it was the concept of certainty that was reconsidered and relativized; now it is the concept of reality. In the description of the world created by representational technologies there are thus two worldviews with different ontologies: either this world is inhabited by (absolute) real and (absolute) potential beings – or all the beings in this world are virtual.
This paper theorises the culture of remix and reuse in a contemporary data-saturated society from a practice-based phenomenological perspective by studying how the materials in a number of animation archives housed at the University for the Creative Arts are approached and treated. Remixing and reusing the archival materials not only widens access to the animation archives, but also transforms learning activities. In so doing, it enhances student-centred flexible learning, and encourages students to contribute to sustainable cultural heritage.
The analytical framework will be based on Callon's 'sociology of translation' (1986; Callon et al., 1983) and Callon and Law's concept of 'interests and enrolment' (1982) to understand how the archival materials are 'domesticated', '(re-)appropriated' and re-contextualised in the archive visitors' works (or how they influence the visitors' work). We will identify the relationships between the actors (the animators) and the archival materials, and the possibilities of interaction (e.g., the materiality of animation and production labours), and discuss how the margins of manoeuvre are negotiated and delimited (ibid.: 68). For example, the materiality of animation and production labours can be explored by asking how these forms of knowledge continue to inform the types of animation films made by practitioners. More broadly, we wish to consider how interactions between humans and technologies change behaviours and can be appropriated in different situations, how the body is engaged in those endeavours of shifting modes and materials of production, and how these differentials impact on the type of work that is made.
Abstract: Selected philosophical issues arising from computational scientific discovery aimed at semi-automatically generating theories in cognitive science are examined. Theories can be produced in new ways by generating them semi-automatically through genetic programming. Cognitive science theories can be represented as programs of this kind and modified as the programs evolve. The possibility of equating information and the genetic programming simulation of it has led some researchers in computational cognitive science to claim that computational simulation of cognition is actually cognition. However, this methodology is problematic, so it is preferable to demarcate sharply between genetic programs and cognitive science theories. The latter approach is valuable as it encourages clear formulation of cognitive science theories and gives indications about where revision might be necessary.
What is intelligence? Is it something that we measure when we conduct so-called IQ tests? Is it something that, no matter how it gets measured, is uniquely human? Does some form of the Turing test, an alleged indicator of the presence of machine intelligence, provide some help in answering the original question? Is there such a thing as ‘emotional intelligence’? If so, how is it related to traditional, i.e. non-emotional, intelligence? Much disagreement surrounds these and other related questions. In this talk, I address the first and most central of these questions by focusing on two traits that, as I argue, are ubiquitous in behaviour that we intuitively deem intelligent, namely success in problem-solving and portability. I argue for a specific articulation of these traits and conclude that a conception of intelligence with this articulation at its foundations makes some headway in understanding the phenomenon under study.
Several conceptions of intelligence have been proposed over the years – see Sternberg and Detterman (1986) for examples. No specific conception seems to have been widely accepted, however, and, as a consequence, recent attempts have been less committal about the details of what intelligence means and more inclined to emphasise the non-triviality of associated projects such as how best to measure intelligence. This is nowhere more clearly evident than in the so-called ‘mainstream science on intelligence’ movement, whose public statement (signed by fifty-two researchers) defends the use of intelligence tests but falls short of specifying exactly what intelligence amounts to. Instead, the statement exhibits a reluctance to be drawn into any substantive details of what intelligence means. Intelligence, we are told, “is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience” (Gottfredson 1997: 13).
Although short on detail, this characterisation of intelligence contains the seeds of a more thoroughgoing approach to the question of intelligence. Take problem-solving. Gardner (1983) puts this trait at the centre of his conception of intelligence. And with good reason, or so I will argue. Any set of circumstances that requires the application of intelligent behaviour to bring about some desired result can adequately be re-described in terms of a problem-solving activity. Take, for example, the abilities of planning and of abstract or complex thinking, which in the aforementioned statement are listed as distinct from each other but also from problem-solving. Any instance of these abilities can adequately be re-described in terms of a problem-solving activity. Thus, planning a trip from A to B can be re-described as involving (i) the problem of how one can get to B given that one starts from A and has certain resources at one's disposal, and (ii) a non-empty set of solutions that contains at least one feasible route from A to B which makes use only of the available resources. This method of re-description obviously generalises. In any given case we identify the problem as the question of how to proceed from a set of circumstances to a desired result, and a solution as the behaviour that one can employ to achieve the desired result. To give more richness to our framework, we may also employ a notion of resource-efficiency that allows for at least a partial ordering of the efficiency of solutions for a given problem. With this articulation in mind, one can put forth various intelligence-related conjectures. For example, an agent X1 is strictly more intelligent than an agent X2 with respect to a type of problem p if X1 always arrives at a more resource-efficient solution to a token of p than X2.
Beyond the ability to solve problems there is also another trait that in my view is of paramount importance in characterising intelligent behaviour. I call this trait ‘portability’. Roughly put, it is the ability to find one or more solutions to a wide range of problems. (Similar ideas have been floated in the literature on intelligence, e.g. Sternberg and Salter’s (1982) idea of ‘adaptive behaviour’.) The greater the range, the more portable the ability. Part of the reason why, at the time of writing, artificial intelligence has had limited success in passing something like the Turing test is the fact that its various incarnations are good, and sometimes even better than humans, at solving some but not all problems that human beings are able to solve. A truly portable problem-solving machine, or at least one that is as portable as a human being, would presumably pass the Turing test with flying colours. That is because one type of problem-solving case involves natural language question-and-answer games. The problem in this type of case is how to proceed from the given conversational context to formulate a human-like answer to the given question in the Turing test, and a solution is the verbal answer that can be employed to achieve the desired deception. Once again, we can employ this articulation of the notion of portability to put forth various conjectures that are tied to intelligence. Modifying the conjecture cited above, we can, for example, say that a (biological or artificial) agent X1 is strictly more intelligent than a (biological or artificial) agent X2 if, and only if, for any type of problem p, X1 always arrives at a more resource-efficient solution to a token of p than X2.
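The strict-ordering conjecture above admits a simple formal reading. The following is a minimal, illustrative sketch of it (my own toy model, not from the talk): agents are modelled as functions from a problem token to the resource cost of the solution they find, with lower cost meaning a more resource-efficient solution; all names and the cost model are assumptions for illustration.

```python
def strictly_more_intelligent(agent1, agent2, tokens):
    """True iff agent1 finds a strictly more resource-efficient
    solution than agent2 on every token of the problem type."""
    return all(agent1(t) < agent2(t) for t in tokens)

# Toy route-planning problem type: a token is the distance from A to B,
# and an agent's cost is the number of steps its chosen route takes.
tokens = [3, 5, 8]
planner = lambda d: d        # always finds the direct route
wanderer = lambda d: 2 * d   # always takes a detour

print(strictly_more_intelligent(planner, wanderer, tokens))   # True
print(strictly_more_intelligent(wanderer, planner, tokens))   # False
```

Note that the relation so defined is only a partial order: two agents each better on different tokens are simply incomparable, which matches the text's appeal to "at least a partial ordering" of solution efficiency.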
Gardner, H. (1983) Frames of mind: The theory of multiple intelligences, New York: Basic Books.
Gottfredson, L.S. (1997) ‘Mainstream science on intelligence’, Intelligence, vol. 24(1): 13-23.
Sternberg, R.J. and D.K. Detterman (eds.) (1986) What is intelligence? Contemporary viewpoints on its nature and definition, Norwood, NJ: Ablex.
Sternberg, R.J. and W. Salter (1982) ‘Conceptions of intelligence’, in R. J. Sternberg (ed.), Handbook of human intelligence, Cambridge: Cambridge University Press, pp. 3-28.
Abstract: The paper discusses Antonio Damasio's understanding of higher-level neurological and psychological functions in __Self Comes to Mind__ (2010) and argues that the distinction he posits between regulatory (homeostatic) physiological structures and non-regulatory higher-level structures such as drives, motivations (and, ultimately, consciousness) presents philosophical and technical problems. The paper suggests that a purely regulatory understanding of drives and motivations (and, consequently, of cognition as well) as higher-order regulations could provide a unified theoretical framework capable of overcoming the old split between cognition and homeostasis that keeps resurfacing, under different guises, in the technical as well as in the non-technical understandings of consciousness and associated concepts.
Abstract: In this paper I argue that human beings should reason, not in accordance with classical logic, but in accordance with a weaker ‘reticent logic’. I describe such a reticent logic, and then show that arguments for the existence of fundamental Gödelian limitations on artificial intelligence are undermined by the idea that we should reason reticently, not classically.
Abstract: Every intelligent autonomous agent must presumably employ some form of conditional reasoning if it is to thrive in its environment. Even the first author’s dog seems aided by the faculty to carry out such reasoning. For suppose Rupert is called from the tub to come over. He certainly seems to know that his coming (χ), combined with the conditional that if he does, he is likely to endure his least favorite chore (getting a bath = β), implies that he will receive a bath. Isn’t that why he looks at his master and the tub from a distance, and stays right where he is? Unfortunately, as is well known, it has proved remarkably difficult to suitably model, formally, the conditional reasoning used by intelligent autonomous agents of the biological variety, so that insight can be gained as to how one of the silicon variety, e.g. a smart robot, can be engineered.