10th International Brazilian Meeting on Cognitive Science

Round Table 1: Computational models for simulating cognition and behavior: reflections on their methods, scope, and underlying epistemology

December 7, 2015 – 2:00 PM

Theme 1 – On the Different Sorts of Artificial Intelligence: Deep, Shallow and Mimicking
Presenter – Flavio Soares Correa da Silva
(Department of Computer Science – IME – University of São Paulo)


The field of Artificial Intelligence (AI) as we know it today started in 1956 in the United States, as an initiative of J. McCarthy and colleagues. It was planned to be multidisciplinary and has always aimed at the study of Intelligence through its reconstruction in human-designed platforms. Around the same period and location, the field of Human-Computer Interaction (HCI) started to be structured around the notion of Human Augmentation proposed by D. Engelbart, among other scholars. For several years, AI and HCI were developed by separate, nearly disjoint research communities, carrying conflicting views about the most appropriate ways to bring together humans and digital/computational machines.

Throughout the history of AI, some initiatives have taken the road of foundational scientific endeavor, while others have focused on the appropriation of techniques inspired by biological phenomena to design and build useful artifacts. Within the AI research community, some scholars have named the former initiatives Deep AI, and the latter Shallow AI. Deep AI refers to attempts to isolate Intelligence as an observable phenomenon, and to build a deeper and broader understanding of this phenomenon through model building and simulation in man-made substrates. Shallow AI, in contrast, emphasizes the potential of techniques that emerge from observing biological systems whose behavior can be deemed intelligent, as tools to build artifacts that, by some appropriate metric, can be considered improvements on previously existing artifacts with similar functionalities. A third possible road, in which digital artifacts could be designed to mimic intelligent behavior convincingly, was initially viewed as a road to be avoided, since results obtained this way could be taken as forgery and potentially unethical behavior.
Recently, however, interesting methodological evolutions have taken place, aligning the fields of AI and HCI and clarifying that Deep, Shallow and Mimicked AI are not, in fact, as different as initially considered. By taking a more encompassing (and hence significantly more challenging) design stance, one can consider the design of social systems composed of digital/computational artifacts as well as human participants. Such systems, frequently called Social Machines, should be structured in such a way that digital components are programmed, and human components effectively incentivized, to cooperate. In order to build high-quality social machines, one needs to build appropriate interaction networks and protocols; program the behavior of digital components based on well-grounded (deep) models of intelligence; make sure that these models are computationally efficient (hence implemented according to the precepts of good shallow models of intelligence); and, finally, make sure that digital components can be perceived by human components as intelligent, so that social interactions can occur with the required fluidity. As a consequence, mimicked intelligence has been accepted as a third facet of intelligence, required to build digital devices that may deserve to be accepted as intelligent in social interactions.
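The requirements just listed can be caricatured in a few lines of code. The sketch below is purely illustrative and uses hypothetical names of our own choosing; it is not an implementation from the talk. It shows the minimal shape of a social machine: programmed digital components, incentivized human participants, and a protocol that routes a task to both.

```python
# Toy sketch of a "social machine": digital components follow a programmed
# policy, human participants cooperate only when incentivized, and a simple
# interaction protocol mediates between them. All names are hypothetical.

class DigitalComponent:
    """A programmed participant: answers every request deterministically."""
    def act(self, request):
        return f"computed:{request}"

class HumanParticipant:
    """An incentivized participant: contributes only if the reward suffices."""
    def __init__(self, reward_threshold):
        self.reward_threshold = reward_threshold
    def act(self, request, reward):
        if reward >= self.reward_threshold:
            return f"contributed:{request}"
        return None  # declines to cooperate

class SocialMachine:
    """Routes a task over an interaction network of mixed participants."""
    def __init__(self, digital, humans, reward):
        self.digital, self.humans, self.reward = digital, humans, reward
    def run(self, request):
        results = [d.act(request) for d in self.digital]
        for h in self.humans:
            r = h.act(request, self.reward)
            if r is not None:
                results.append(r)
        return results

machine = SocialMachine([DigitalComponent()],
                        [HumanParticipant(reward_threshold=1)],
                        reward=2)
print(machine.run("label-image"))
```

With the reward above the threshold, both kinds of participants cooperate on the request; lowering the reward below the threshold makes the human component drop out, leaving only the programmed behavior.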


Theme 2 – Emergent Signs, Enactive Cognition and Complex Systems
Presenter – Leonardo Lana de Carvalho
(PPG Ciencias Humanas, Universidade Federal do Vale do Jequitinhonha e Mucuri, Diamantina, MG)


We emphasize that any form of naturalization of phenomenology will differ widely from pure phenomenology. The concept of enaction provides a natural alternative for explaining cognition, strongly connected with biological scientific thinking and inspired by phenomenology. The living being is presented as a producer of itself. This is only possible for beings that produce the conditions of their own existence. The environment modifies or perturbs a structure whose function is to maintain that structure. In this sense, the living entity is described as a history of self-perpetuation in the world that takes place through its structural coupling and its operational closure. The evolution of species occurs by means of natural drift. The source of intelligence is the body in action, and we stress that the nature of cognition is to be in action (“en acción”). If a system is self-organized and structurally coupled to an environment, its actions are adaptive; these actions are intelligent in that environment. In this sense, the theory of enaction does not need the concept of a “res cogitans” or “mental representation” to explain cognition. On the other hand, forms of material representation have been proposed by other theories in cognitive science, with great success in the modeling and synthesis of intelligent systems. The purpose of this paper is to defend a promising possibility of theoretical and practical alliance between the enactive theory of cognition and notions of “information”, “representation”, “sign”, etc. that are consistent with this theory. In our view, the key concept of this alliance is that of the emergent sign. The enactive approach received significant influence from connectionism, especially regarding the concepts of self-organization and emergent properties.
Connectionism has sought a solution to this problem with the concepts of microrepresentations (material symbols manipulated by the machine) and macrorepresentations (patterns emerging from the material symbolic activity in interaction with the environment). Maturana and Varela argue in this sense that “… interactions (once recurring) between unity and an environment consist of reciprocal disturbances. In these interactions, the environmental structure only triggers structural changes on autopoietic units (do not determine or inform), and vice versa for the environment.” We argue that the main influence of enactive theory on computational thinking is the renewal of artificial intelligence around enactive concepts. However, overcoming the “problem” of “enaction” versus “representation” means introducing a new paradigm in cognitive science: the complex systems paradigm of cognition. Steels began to signal the transition from the enactive theory to the complex systems theory of cognition. His work on the development of language, such as “Language as a Complex Adaptive System” from 2000, presents language as emerging from a complex network of interactions, arising from the interaction of agents with their environment. Another important article, “Intelligence with representation”, was published in 2003. In this paper, the author opposes Brooks, explaining that a semiotic notion of representation should be maintained. Mitchell (1998), in the article “A complex-systems perspective on the ‘computation vs. dynamics’ debate in cognitive science”, argues that “Most of these theories assume that information processing consists of the manipulation of explicit, static symbols rather than the autonomous interaction of emergent, active ones.” We argue that an enactive cognitive agency must contain an algorithm that is neither a reinforcement function nor a problem-solving algorithm consisting of deduction and inference functions. Indeed, the construction of a world is sought as a way of being in the world.

Using Dreyfus’s term, this would be a “skillful coping” algorithm, or an autotelic principle. The agent would not receive an input i or a reinforcement s; its inputs would be better described as perturbations. Our point is that these perturbations lead to an internal construction B that is, from the perspective of the system’s history, the effect of the agent’s coupling with the environment. Such a building block B can be useful to the agent for reprogramming its own algorithm (self-programming). According to Rocha & Hordijk (2005), this B can serve to guide the development of complex adaptive systems, much as a biological organism makes use of its genetic code to guide its development. According to Steels (2003), this B can also be useful to agent architectures as signs in semiotic relationships, under the aegis of cross- or multi-scale levels of structural coupling processes. We stress the importance of the enactive approach in the design of agents as artificial autopoietic beings, understanding that previous approaches have very different cognitive architectures and that a prototypical model of enactive cognitive architecture is one of the major challenges today. Indeed, this is a sensitive matter and we do not have the space here to address it properly. However, we note that it is an aspect that currently divides the embodied cognitive science community, and it may even be signalling a transition to a complex systems theory of cognition. Crutchfield (1994) understands that new machine models are required to investigate emergence and complex systems. According to the author, the complex systems approach to the computing machine consists of a particular notion of structure. The structure of the complex machine would be based on “nonlinear mechanical computing processes”. This malleable structure can be modified by means of mechanisms for the transformation of the structure. These mechanisms of transformation would lead to a constant “reconstruction of the hierarchical machine” by itself. To connect the structural reconstruction processes, Crutchfield provides an “evolutionary mechanics”. He then suggests that this complex machine should be the standard model for the study of complex systems and emergence. In conclusion, we hold that any cognitive agency with enactive bases must conceive the agent’s structures as coupled to the environment. An autopoietic machine should be able to pass through natural drift. However, the construction of complex machines needs a coherent theory assimilating the concepts of enaction and material representation. We think that this theory is based on the concept of emergent signs or similar notions. Following Fodor (2000), perhaps the investigation of this reality is not interesting for some cognitive engineers. However, this research is profoundly important to cognitive science and the philosophy of mind. We argue that its technological applications will surpass expectations.
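The perturbation → internal construction B → self-programming loop described above can be sketched in a few lines of code. This is a speculative toy under our own simplifying assumptions (B reduced to a frequency record, self-programming reduced to swapping a behavior function), not an architecture proposed by the cited authors:

```python
# Toy sketch of an enactive agent: environmental events are treated as
# perturbations (not inputs mapped to outputs), their history is compressed
# into an internal structure B, and B is then used by the agent to rewrite
# its own behavior (self-programming). Purely illustrative.

class EnactiveAgent:
    def __init__(self):
        self.B = {}                      # structure built from coupling history
        self.behavior = lambda p: 0.0    # initial, uninformed behavior

    def perturb(self, perturbation):
        # Structural coupling: the perturbation triggers (does not determine)
        # a change in B -- here, a running count of perturbation kinds.
        self.B[perturbation] = self.B.get(perturbation, 0) + 1
        self._reprogram()

    def _reprogram(self):
        # Self-programming: B guides the construction of a new behavior,
        # analogous to a genetic code guiding development (Rocha & Hordijk).
        most_frequent = max(self.B, key=self.B.get)
        self.behavior = lambda p: 1.0 if p == most_frequent else 0.0

agent = EnactiveAgent()
for event in ["light", "light", "sound"]:
    agent.perturb(event)
print(agent.behavior("light"))   # the agent has re-shaped itself around "light"
```

The point of the sketch is only structural: nothing in the loop is a reinforcement signal or an inference step; the agent's current behavior is an effect of its history of couplings, recorded in B.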


Theme 3 – Algebraic Semiotics for Specification of Cognitive Aspects in Human-Computer Interaction
Presenter – Luciano Silva
(U.P. Mackenzie, São Paulo)


In Human-Computer Interaction (HCI), there is a constant need to understand the mechanisms of human perception involved in the process of interaction with computers, whose results may yield important information for specifying and building interfaces with better usability and learnability. If communication processes with the interface are not planned with due attention to human factors, common problems arise, such as difficulty in locating desired tasks and excessively long paths and times to complete them. For example, the presence of unused functions and the absence of needed ones, together with the difficulty of remembering the route to tasks, may compromise the indices associated with the evaluation of an interface. Techniques from the cognitive sciences can be used to improve interface design. They provide a mental model of the user which can be exploited to observe the demands placed on users' cognitive processes (experience, interpretation, memory and learning). One of the recurring problems in using these mental models is how to model them formally, in such a way as to allow their inclusion as components in the formal specification of an interface or in evaluation procedures. There are several approaches to this problem, and Algebraic Semiotics has offered a viable environment not only for representing cognitive issues on interfaces but also for integrating them into evaluation procedures based on formal methods. Algebraic Semiotics provides a framework for quantitative and qualitative analysis of interfaces, design criteria for creating interfaces, and a strong relation to dynamic algebraic semantics. Using systems of signs, Algebraic Semiotics can address various cognitive aspects of an interface through precise algebraic definitions of sign systems and representations, a calculus of representation with laws governing operations for combining representations, and precise ways to compare the quality of representations. Moreover, it is possible to extend the constructions of Algebraic Semiotics to include dynamic signs for user interaction (e.g., Hidden Algebra), combinations of algebraic structures with Gibsonian affordances, narrative structures, social foundations, computational semiosis, and orderings on representations.
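To make the ingredients above concrete, the sketch below encodes a sign system as sorts plus constructors and a representation as a mapping between two sign systems, then scores the representation by how much source structure it preserves. The class names and the quality metric are our own illustrative choices (a crude stand-in for the comparison of representations in Algebraic Semiotics), not definitions from the talk:

```python
# Hedged sketch of algebraic-semiotic notions: a sign system (sorts and
# constructors) and a morphism from one sign system to another. Quality is
# approximated here as the fraction of source constructors the morphism
# preserves -- an illustrative metric, not the theory's actual definition.

from dataclasses import dataclass, field

@dataclass
class SignSystem:
    sorts: set
    # constructor name -> (argument sorts, result sort)
    constructors: dict = field(default_factory=dict)

@dataclass
class SemioticMorphism:
    source: SignSystem
    target: SignSystem
    sort_map: dict          # source sort -> target sort
    constructor_map: dict   # source constructor -> target constructor (partial)

    def quality(self):
        """Fraction of source constructors preserved by the morphism."""
        preserved = sum(1 for c in self.source.constructors
                        if c in self.constructor_map)
        return preserved / len(self.source.constructors)

# A tiny interface example: an abstract task list represented as a screen table.
tasks = SignSystem({"Task", "List"},
                   {"item": (("Task",), "List"),
                    "append": (("List", "Task"), "List")})
screen = SignSystem({"Row", "Table"},
                    {"row": (("Row",), "Table")})
m = SemioticMorphism(tasks, screen,
                     {"Task": "Row", "List": "Table"},
                     {"item": "row"})   # "append" is not represented on screen
print(m.quality())  # 0.5 -- half of the source structure is preserved
```

Comparing such quality scores across candidate morphisms gives a (deliberately simplified) picture of how orderings on representations can rank alternative interface designs for the same abstract content.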


Theme 4 – Explaining Psychological Phenomena: The Role of Experimental and Artificial Simulations
Presenter – Diego Zilio
(UFES – Federal University of Espírito Santo, Vitória, ES)


What is the role of simulation in explaining psychological phenomena? My goal in this talk is to discuss this question. I will start by analyzing the definition of “simulation” as representation through models. Two possible ways of simulating psychological phenomena arise from this definition: (a) simulation as experimental models, usually adopted in experimental psychology in the study of human and non-human behavior; and (b) simulation as artificial models, used in cognitive science aiming at the implementation of cognitive processes in machines. Both alternatives will be discussed in the light of a biologically oriented mechanistic conception of explanation. I will argue that experimental simulations are essential to the construction of psychological knowledge and must precede artificial simulation when possible. Artificial simulations, on the other hand, have at least two main functions: to contribute to the validation of the knowledge produced by experimental simulations, and to create useful technologies aimed at the resolution of human problems.