Cogprints: no conditions; results ordered by -Date, Title. Feed generated 2018-01-17T14:27:16Z by EPrints (http://cogprints.org/; site logo: http://cogprints.org/images/sitelogo.gif).

http://cogprints.org/id/eprint/9956 (deposited 2015-10-06T12:09:46Z)
THE SPECIES PROBLEM AND ITS LOGIC: Inescapable Ambiguity and Framework-relativity
For more than fifty years, taxonomists have proposed numerous alternative definitions of species in the search for a unique, comprehensive, and persuasive definition. This monograph shows that these efforts have been unnecessary, and indeed provably a pursuit of a will-o'-the-wisp, because they have failed to recognize the theoretical impossibility of what they seek to accomplish. A clear and rigorous understanding of the logic underlying species definition leads both to a recognition of the inescapable ambiguity that affects the definition of species, and to a framework-relative approach to species definition that is logically compelling, i.e., that cannot be rejected without inconsistency. An appendix reflects upon the conclusions reached, applying them in an intellectually whimsical taxonomic thought experiment that conjectures the possibility of an emerging new human species.
Dr. Steven Bartlett (sbartlet@willamette.edu)

http://cogprints.org/id/eprint/9756 (deposited 2014-08-24T21:07:30Z, last modified 2015-04-20T11:40:47Z)
Co-Variations among Cognition, Cerebellar Disorders and Cortical Areas with Regional Glucose-Metabolic Activities in a Homogeneous Sample with Uner Tan Syndrome: Holistic Functioning of the Human Brain
Patients with Uner Tan syndrome (UTS) exhibit habitual quadrupedal locomotion (QL), intellectual disability, dysarthric speech and truncal ataxia. Cognitive ability in this syndrome has not yet been examined in the scientific literature. Aims: (i) to analyze the cognitive abilities of the siblings with UTS; (ii) to assess the grade of their ataxia in relation to cerebellar disorders; (iii) to measure the metabolic activities of various cerebral regions in comparison with healthy individuals; (iv) to detect the interrelationships among all of the measured variables (IQ test scores, ataxia scores, cerebro-cerebellar areas and their metabolic activity levels) to reveal the holistic activity of the brain. The Mini-Mental State Examination (MMSE) and Wechsler Adult Intelligence Scale (WAIS-R) were applied to the affected cases and healthy subjects. Cerebellar disorders were assessed with the International Cooperative Ataxia Rating Scale (ICARS). Brain MRI scans were performed, cerebro-cerebellar areas were measured on the scans, and their metabolic activities (SUV) were measured by positron emission tomography (PET). MMSE and WAIS-R scores both correlated with cerebro-cerebellar areas. Cerebello-vermial areas and their metabolic activities were significantly smaller in patients than in normal controls; areas of the remaining structures were not significantly different between patients and healthy subjects. Brain areas significantly inter-correlated: ICARS negatively correlated with WAIS-R and MMSE scores, SUV, and cerebro-cerebellar areas, which significantly correlated with each other. The results suggested that (i) ICARS may not only be a test for cerebellar disorders but may also be related to the global functioning of all of the cerebro-cerebellar regions; (ii) ICARS, WAIS-R and MMSE may be measures of emergent properties of the holistic activity of the brain; (iii) the psychomotor disorders in UTS may be related to decreased brain metabolism.
Prof. Dr. Uner Tan (unertan37@yahoo.com)

http://cogprints.org/id/eprint/9225 (deposited 2014-05-10T00:07:47Z)
Elements of dialectical contextualism
In what follows, I strive to present the elements of a philosophical doctrine that can be defined as dialectical contextualism. I proceed first to define the elements of this doctrine: dualities and polar contraries, the principle of dialectical indifference, and the one-sidedness bias. I then emphasize the special importance of this doctrine in one specific field of meta-philosophy: the methodology for solving philosophical paradoxes. Finally, I describe several applications of this methodology to the following paradoxes: Hempel's paradox, the surprise examination paradox and the Doomsday Argument.
Dr Paul Franceschi (p.franceschi@univ-corse.fr)

http://cogprints.org/id/eprint/9223 (deposited 2014-05-10T00:07:27Z)
Structure and Dynamics in Implementation of Computations
Without a proper restriction on mappings, virtually any
system could be seen as implementing any computation. That would not allow characterization of systems in terms of implemented computations, and it is not compatible with a computationalist philosophy of mind. Information-based criteria for independence of substates within structured states are proposed as a solution. Objections to the use of requirements for transitions in counterfactual states are addressed, in part by using the partial-brain argument as a general counterargument to neural replacement arguments.
Dr. Jacques Mallah (jackmallah@yahoo.com)

http://cogprints.org/id/eprint/8983 (deposited 2013-09-17T14:26:40Z)
What is the maths differance?
The paper presents a new framework for the development of mathematics, called maths differance. There are three typical kinds of maths differance: prove, axiom and shift, as well as some others.
Zhicheng Yu (strongart@qq.com)

http://cogprints.org/id/eprint/8144 (deposited 2012-04-25T12:30:10Z)
Quantum Genetics and Quantum Automata Models of Quantum-Molecular Selection Processes Involved in the Evolution of Organisms and Species
Previous theoretical or general approaches (Rosen, 1960; Shcherbik and Buchatsky, 2007) to the problems of Quantum Genetics and Molecular Evolution are considered in this article from the point of view of Quantum Automata Theory, first published by the author in 1971 (Baianu, 1971a, b) and further developed in several recent articles (Baianu, 1977, 1983, 1987, 2004, 2011). The representation of genomes and interactome networks in categories of many-valued logic LMn-algebras that are naturally transformed during biological evolution, or that evolve through interactions with the environment, provides new insight into the mechanisms of molecular evolution, as well as organismal evolution, in terms of sequences of quantum automata. Phenotypic changes are expressed only when certain environmentally induced quantum-molecular changes are coupled with an internal re-structuring of major submodules of the genome and interactome networks related to cell cycling and cell growth. Contrary to the commonly held view of 'standard' Darwinist models of evolution, the evolution of organisms and species occurs through coupled multi-molecular transformations induced not only by the environment but actually realized through internal re-organizations of genome and interactome networks. The biological, evolutionary processes involve certain epigenetic transformations that are responsible for phenotypic expression of the genome and interactome transformations initiated at the quantum-molecular level. It can thus be said that only quantum genetics can provide correct explanations of evolutionary processes that are initiated at the quantum, multi-molecular levels and propagate to the higher levels of organismal and species evolution. Biological evolution should therefore be regarded as a multi-scale process initiated by underlying quantum (coupled) multi-molecular transformations of the genomic and interactomic networks, followed by specific phenotypic transformations at the level of the organism and of the variable biogroupoids associated with the evolution of species, which are essential to the survival of the species. The theoretical framework introduced in this article also paves the way to a Quantitative Biology approach to biological evolution at the quantum-molecular as well as at the organismal and species levels. This is quite a substantial modification of the 'established' modern Darwinist theories, and also of several so-called 'molecular evolution' theories.
Professor I.C. Baianu (ibaianu@illinois.edu)

http://cogprints.org/id/eprint/8729 (deposited 2012-11-25T12:34:51Z, last modified 2013-02-18T15:12:36Z)
On Reverse Engineering in the Cognitive and Brain Sciences
Various research initiatives try to utilize the operational principles of organisms and brains to develop alternative, biologically inspired computing paradigms and artificial cognitive systems. This article reviews key features of the standard method applied to complexity in the cognitive and brain sciences, i.e. decompositional analysis or reverse engineering. The indisputable complexity of brain and mind raises the issue of whether they can be understood by applying the standard method. Indeed, recent findings in the experimental and theoretical fields question central assumptions and hypotheses made for reverse engineering. Using the modeling relation as analyzed by Robert Rosen, the scientific analysis method itself is made a subject of discussion. It is concluded that the fundamental assumption of cognitive science, namely that complex cognitive systems can be analyzed, understood and duplicated by reverse engineering, must be abandoned. Implications for investigations of organisms and behavior, as well as for engineering artificial cognitive systems, are discussed.
Prof. Andreas Schierwagen (schierwa@uni-leipzig.de)

http://cogprints.org/id/eprint/7247 (deposited 2011-05-02T17:17:47Z)
Quanta Mathematica Instrumentalis!
Quanta mathematica instrumentalis, from the Latin, might mean "how much mathematics for physical applications". But we try to give this expression another meaning.
We discuss how mathematics and its instrumental nature could serve as a paradigm for other human activities and for science in general. We introduce the notions of a higher observer and a field of information. We discuss the question of why we should study and develop mathematics more diligently than we do naturally.
Dainis Zeps (dainis.zeps@lumii.lv)

http://cogprints.org/id/eprint/7757 (deposited 2011-12-16T00:59:10Z)
From Simple to Complex and Ultra-complex Systems: A Paradigm Shift Towards Non-Abelian Systems Dynamics
Atoms, molecules and organisms distinguish layers of reality because of the causal links that govern their behavior, both horizontally (atom-atom, molecule-molecule, organism-organism) and vertically (atom-molecule-organism). This is the first intuition of the theory of levels. Even if the further development of the theory will require imposing a number of qualifications on this initial intuition, the idea of a series of entities organized on different levels of complexity will prove correct. Living systems, as well as social systems and the human mind, present features remarkably different from those characterizing non-living, simple physical and chemical systems. We propose that super-complexity requires at least four different categorical frameworks, provided by the theories of levels of reality, chronotopoids, (generalized) interactions, and anticipation.
Prof. Dr. I.C. Baianu and Prof. Dr. Roberto Poli

http://cogprints.org/id/eprint/7066 (deposited 2010-10-26T18:22:27Z, last modified 2011-03-11T08:57:46Z)
Exploring Ancient Architectural Designs with Cellular Automata
The paper discusses the use of three-dimensional cellular automata, built from two-dimensional totalistic cellular automata, to simulate how simple rules could give rise to the highly complex architectural designs of some Indonesian heritage sites. A detailed discussion examines the simple rules applied in Borobudur Temple, the largest ancient Buddhist temple in the country, with very complex detailed designs within. The simulation confirms some previous findings related to measurements of the temple as well as of some other ancient buildings in Indonesia. This opens further exploration of the explanatory power of cellular automata for complex architectural designs built by a civilization that had no sophisticated supporting tools, nor even standard measurement systems.
Hokky Situngkir

http://cogprints.org/id/eprint/6898 (deposited 2010-07-29T01:44:29Z, last modified 2011-03-11T08:57:38Z)
Uniqueness, Self belonging and Intercourse in Nature
This manuscript has ensued from my past studies in biochemistry (PhD, CUNY 1986) and my current endeavors in graduate study in philosophy and anthropology. The research project began during my period as a graduate student in biochemistry, with a professor of classical genetics' comment that DNA was unique in the physical world. The paradox of relating this notion to existing natural law led me to evolve and communicate a view that the world itself is a special case of a general case that has no relevant physical existence. I also hope to have presented a description of a situation that connects history, human behavior, the process and symbolisms of science, and cause and effect to a holism of form, philosophy, mathematics, shape, and motion.
Dr. Marvin E. Kirsh (kirsh2152000@yahoo.com)

http://cogprints.org/id/eprint/6849 (deposited 2010-06-06T14:33:44Z, last modified 2011-03-11T08:57:37Z)
A Model for the Rehabilitation of Witness Perspective - The Path of Knowledge: The Knowledge of Path
The sound-producing machinery of change is a viral element in the problems of
civilization. A silent relation that is unmoved, as it is unexposed to the power of the discourse of change, and a silent logic of volumetric processes, together illustrated to accompany empirical nature, are employed for the elaboration of a historical conceptual paradox involving mind and matter. Mind, conceived as an enduring state of the becoming of energy into a state of matter, and matter, as the constantly becoming environment, are tested with criteria of witnessibility for consistency, to capture an acceptably reasoned description, from a modern perspective, of cultural evolution. A self-generating friction at the conceptual border of the social and natural sciences, as the recurring source of the problems of civilization, is discussed. The philosophies of logical positivism and postmodernism, referenced from the elaborated philosophy, are determined to reflect a need for a representation of nature that is independent of the temporal and physical parameters of perspective. Emergence is discussed within the framework of first-witness perspective, and a visually based mathematical-physical model of space is elaborated.
Dr. Marvin E. Kirsh (kirsh2152000@yahoo.com)

http://cogprints.org/id/eprint/7048 (deposited 2010-10-18T11:00:45Z, last modified 2011-03-11T08:57:45Z)
Octology
The manuscript describes a new scientific discipline called Octology, which should unify morphogenetic linguistics and neurobiology to investigate the development of words, cognition and behavior.
Dr. Andrej Poleev (andrejpoleev@yahoo.com)

http://cogprints.org/id/eprint/7959 (deposited 2012-11-09T17:47:35Z)
A Dialogue on Concepts
This short dialogue, in Socratic prose, explores some of the most fundamental constructs in cognition: concepts, thinking and analogy. In short, concepts are the atoms of thought and analogy is the 'ether' of concept formation. Therefore, thinking is the process of triggering memories through analogy.
Professor Ronaldo Vigo (vigo@ohio.edu)

http://cogprints.org/id/eprint/7679 (deposited 2011-10-27T01:31:46Z)
From Cognition to Consciousness:
a discussion about learning, reality representation and decision making
The scientific understanding of cognition and consciousness is currently hampered by the lack of rigorous and universally accepted definitions that permit comparative studies. This paper proposes new functional and unambiguous definitions of cognition and consciousness in order to provide clearly defined boundaries within which general theories of cognition and consciousness may be developed. The proposed definitions are built upon the construction and manipulation of reality representation, decision making and learning, and are scoped in terms of an underlying logical structure. It is argued that the representation of reality also necessitates the concept of absence and the capacity to perform transitive inference. Explicit predictions relating to these new definitions, along with possible ways to test them, are also described and discussed.
Dr David Guez (david.guez@mac.com)

http://cogprints.org/id/eprint/6878 (deposited 2010-07-29T01:51:58Z, last modified 2011-03-11T08:57:38Z)
Knowledge
This is an encyclopedia entry and does not include an abstract.
Davide Mate and Maurizio Tirassa (maurizio.tirassa@unito.it)

http://cogprints.org/id/eprint/7267 (deposited 2011-05-02T17:16:16Z)
Life is an Adventure! An agent-based reconciliation of narrative and scientific worldviews
The scientific worldview is based on laws, which are supposed to be certain, objective, and independent of time and context. The narrative worldview, found in literature, myth and religion, is based on stories, which relate the events experienced by a subject in a particular context with an uncertain outcome. This paper argues that the concept of "agent", supported by the theories of evolution, cybernetics and complex adaptive systems, allows us to reconcile the scientific and narrative perspectives. An agent follows a course of action through its environment with the aim of maximizing its fitness. Navigation along that course combines the strategies of regulation, exploitation and exploration, but needs to cope with often unforeseen diversions. These can be positive (affordances, opportunities), negative (disturbances, dangers) or neutral (surprises). The resulting sequence of encounters and actions can be conceptualized as an adventure. Thus, the agent appears to play the role of the hero in a tale of challenge and mystery that is very similar to the "monomyth", the basic storyline that underlies all myths and fairy tales according to Campbell [1949]. This narrative dynamic is driven forward in particular by the alternation between prospect (the ability to foresee diversions) and mystery (the possibility of achieving an as yet absent prospect), two aspects of the environment that are particularly attractive to agents. This dynamic generalizes the scientific notion of a deterministic trajectory by introducing a variable "horizon of knowability": the agent is never fully certain of its further course, but can anticipate it to a degree depending on its prospect.
Francis Heylighen (fheyligh@vub.ac.be)

http://cogprints.org/id/eprint/6765 (deposited 2010-04-01T19:37:02Z, last modified 2011-03-11T08:57:35Z)
Is Phenomenal Consciousness a Complex Structure?
Evolutionary explanations of psychological phenomena have become widespread. This paper examines a recent attempt by Nichols and Grantham (2000) to circumvent the problem of epiphenomenalism in establishing the selective status of consciousness. Nichols and Grantham (2000) argue that a case can be made for the view that consciousness is an adaptation, based on its complexity. I set out this argument and argue that it fails to establish that phenomenal consciousness is a complex system. It is suggested that the goal of establishing consciousness as an adaptation may be better served by rejecting the distinction between access consciousness and phenomenal consciousness.
Chuck Stieg (stie0076@umn.edu)

http://cogprints.org/id/eprint/6848 (deposited 2010-06-06T14:33:06Z, last modified 2011-03-11T08:57:37Z)
Anthropology and parallelism: The individual as a universal
It is difficult to define perspective within sets that are self-belonging. For example, in the study of mankind, anthropology, both men and their studies fall into the same category that contains the topic outline. This situation entails a universal quality of uniqueness, an instance of it, to the topic of anthropology that may be viewed in parallel with the topic of nature as the set of unique particulars. Yet one
might assent to the notion, in the inclusive study of man, anthropology, that nothing in its content should conceivably be construed to exceed it, though approaches to the topic unavoidably refer to the scientific topic of nature, in which contemporary notions, when contrasted, exceed the perceptual experience of nature. In this presentation, problems in approaches to the study of man, and in the application of the available tools of analysis, are discussed. Framed with respect to a concept of parallelism, notions and stimuli are introduced to augment and reorient towards a more creative perspective on the organization of first-perspective considerations in studies. The theories of relativity, the idea of mathematical relations for simultaneous events, and the presence of artifactual paradoxes as they are reflected in thinking and in the scientific tools applied in investigations are discussed and highlighted, so that they may be perceived distinctly from the realities involved in the pursuit of studies.
Dr. Marvin E. Kirsh

http://cogprints.org/id/eprint/6551 (deposited 2009-07-02T01:51:39Z, last modified 2011-03-11T08:57:22Z)
Complexity, BioComplexity, the Connectionist Conjecture and Ontology of Complexity
This paper develops and integrates major ideas and concepts on complexity and biocomplexity: the Connectionist Conjecture, a universal ontology of complexity, the irreducible complexity of totality and inherent randomness, the perpetual evolution of information, the emergence of criticality, and the equivalence of symmetry and complexity. The paper introduces the Connectionist Conjecture, which states that the one and only representation of Totality is the connectionist one, i.e. in terms of nodes and edges. It also introduces the idea of a Universal Ontology of Complexity and develops concepts in that direction, along with ideas on the perpetual evolution of information and the irreducibility and computability of totality, all in the context of the Connectionist Conjecture. The paper indicates that control and communication are the prime functionals responsible for the symmetry and complexity of complex phenomena. It takes the stand that the phenomenon of life (including its evolution) is probably the nearest to what we can describe with the term "complexity", and assumes that signaling and communication within the living world, and of the living world with the environment, create the connectionist structure of biocomplexity. With life and its evolution as the substrate, the paper develops ideas towards the ontology of complexity, introduces new complexity-theoretic interpretations of fundamental biomolecular parameters, and develops ideas on the methodology for determining the complexity of "true" complex phenomena.
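The Connectionist Conjecture above holds that any totality is representable purely as nodes and edges. A minimal sketch of such a representation, where the example network and node names (a cell signalling with its genome and environment) are our own illustrative assumptions, not taken from the paper:

```python
# A bare nodes-and-edges representation: the only structure is which
# node is connected to which, as the Connectionist Conjecture requires.

class Network:
    def __init__(self):
        self.edges = {}  # node -> set of neighbouring nodes

    def connect(self, a, b):
        # undirected edge: signalling runs both ways
        self.edges.setdefault(a, set()).add(b)
        self.edges.setdefault(b, set()).add(a)

    def neighbours(self, node):
        return self.edges.get(node, set())

net = Network()
net.connect("cell", "environment")
net.connect("cell", "genome")
net.connect("genome", "environment")

print(sorted(net.neighbours("cell")))  # ['environment', 'genome']
```

Any richer structure (weights, signal types, dynamics) would be layered on top of this same node/edge skeleton.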
Mr. Debaprasad Mukherjee (biodeb@gmail.com)

http://cogprints.org/id/eprint/6417 (deposited 2009-04-12T22:24:38Z, last modified 2011-03-11T08:57:21Z)
On Fodor on Darwin on Evolution
Jerry Fodor argues that Darwin was wrong about "natural selection" because (1) it is only a tautology rather than a scientific law that can support counterfactuals ("If X had happened, Y would have happened") and because (2) only minds can select. Hence Darwin's analogy with "artificial selection" by animal breeders was misleading, and evolutionary explanation is nothing but post-hoc historical narrative. I argue that Darwin was right on all counts. Until Darwin's "tautology," it had been believed that either (a) God had created all organisms as they are, or (b) organisms had always been as they are. Darwin revealed instead that (c) organisms have heritable traits that evolved across time through random variation, with survival and reproduction in (changing) environments determining (mindlessly) which variants were successfully transmitted to the next generation. This not only provided the (true) alternative (c), but also the methodology for investigating which traits had been adaptive, how, and why; it also led to the discovery of the genetic mechanism of the encoding, variation and evolution of heritable traits. Fodor also draws erroneous conclusions from the analogy between Darwinian evolution and Skinnerian reinforcement learning. Fodor's skepticism about both evolution and learning may be motivated by an overgeneralization of Chomsky's "poverty of the stimulus" argument: from the origin of Universal Grammar (UG) to the origin of the "concepts" underlying word meaning, which, Fodor thinks, must be "endogenous" rather than evolved or learned.
Stevan Harnad (harnad@ecs.soton.ac.uk)

http://cogprints.org/id/eprint/6356 (deposited 2009-02-13T01:11:43Z, last modified 2011-03-11T08:57:19Z)
Thoughts, Things, and Theories
We critique the following question: can we have reasonable certainty that the terms in speculative or empirical theories correspond meaningfully to things in the ontological structure of the world, or are they only convenient fictions useful for predicting phenomena? We first justify this question as meaningful and capable of admitting a meaningful answer. We then analyze the question itself, with examples from physics and biology. We conclude that we can be reasonably certain that the terms in an empirical theory have some degree of ontological significance, provided that they are directly related to phenomenal experiences. We also suggest that the advance of science can be aided by this understanding. Finally, we use these conclusions to analyze the existence of the mind and of certain physical structures.
Mr. Blake Winter (mad2physicist@gmail.com)

http://cogprints.org/id/eprint/6322 (deposited 2009-01-21T22:43:48Z, last modified 2011-03-11T08:57:18Z)
Conversations on the Search for a ‘Physics & Chemistry
– an Alchemy’ of Innovation - Reward Systems
Bruno Latour, in "How to evaluate innovation", develops a fairly simple, well-argued procedure based upon the experimental sciences which may prove valuable to all. Latour suggests that the scientific method should be applied not only by scientists but even more so by major decision makers, especially politicians. Doing one's best and working for the better are some of the questions discussed in this paper.
Some of Latour's concepts are clarified by translation into simple graphical models. Models for failure (MTBF, mean time between failures) are playfully and creatively transformed into models for success.
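The MTBF mentioned above is ordinarily computed as total operating time divided by the number of failures. A hedged sketch of that arithmetic, with the "model for success" inversion as our own illustrative reading of the paper's playful transformation, not its actual construction:

```python
# MTBF: mean time between failures = operating time / number of failures.

def mtbf(operating_hours, failures):
    if failures == 0:
        raise ValueError("MTBF is undefined with zero failures")
    return operating_hours / failures

def mtbs(operating_hours, successes):
    # The same arithmetic re-aimed at successes: mean time between
    # successes (an illustrative inversion, not a standard metric).
    if successes == 0:
        raise ValueError("MTBS is undefined with zero successes")
    return operating_hours / successes

print(mtbf(1000, 4))   # 250.0 hours between failures
print(mtbs(1000, 20))  # 50.0 hours between successes
```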
In spite of the many serious issues discussed, this paper hopefully remains light-hearted in its style and approach.
Mr James Alexander (jalex45@gmail.com)

http://cogprints.org/id/eprint/6352 (deposited 2009-02-13T01:12:23Z, last modified 2011-03-11T08:57:19Z)
On Incompatibilist Free Will
We consider the possibility of defining some kind of activity which meets the intuitive requirements of incompatibilist free will. Our analysis is done in a fashion which in some ways parallels the work of Pink on this matter. We then consider the evidence for such free will, both from an introspective perspective and from a scientific perspective. In the latter we consider neurological and psychological evidence.
Mr. Blake Winter (mad2physicist@gmail.com)

http://cogprints.org/id/eprint/6729 (deposited 2009-12-22T00:40:40Z, last modified 2011-03-11T08:57:33Z)
Putting the Philosophy of Science into Mind: Knowing Minds By Models
The philosophy of science can provide fruitful contributions to other areas of philosophy. In this paper, I argue that the application of work on the nature of theories helps to resolve a long-standing dispute in the philosophy of mind over mindreading. The Theory Theory and the Simulation Theory are two competing accounts of how it is that we explain and predict the actions and mental states of others. I discuss each view as well as some of their weaknesses. I suggest that the difficulties each faces depend in part on the notion of theory supposed to be at issue.
After introducing an alternative notion of theory, a model-based view, I try to show that the problems of both views are diminished and that a synthesis results.Chuck Stiegstie0076@umn.edu2008-07-13T11:03:39Z2011-03-11T08:57:09Zhttp://cogprints.org/id/eprint/6123This item is in the repository with the URL: http://cogprints.org/id/eprint/61232008-07-13T11:03:39Z“Io sono evoluto e quello è un uomo di Neanderthal”:
Un’analisi linguistica cognitivista del concetto di evoluzione“Evoluzione” è una parola usata ormai frequentemente dall’uomo comune nonché in tutte le discipline, umanistiche e scientifiche. Culturalmente radicata, è diventata una metafora potente. Una definizione corrente è “sviluppo lento e graduale; svolgimento da una forma a un’altra, generalmente più completa e perfetta” (Garzanti). In questi termini non si parla soltanto dell’evoluzione biologica dell’uomo, ma anche dell’evoluzione del linguaggio, della società, della cognizione umana – a prescindere da un’effettiva conoscenza delle teorie evoluzionistiche.
L’evoluzione, in quanto teoria biologica, rimanda quasi automaticamente alla teoria di Darwin, il quale, tuttavia, ha usato il termine solo una volta, nel paragrafo finale del suo celeberrimo L’origine delle specie (1859). Nel concetto di evoluzione è comunemente implicato il passaggio da una specie “primitiva” ad una specie “progredita”, più avanzata o sofisticata e strutturalmente più complessa. Nei suoi scritti, Darwin preferiva parlare di “discendenza con modificazioni” anziché di “evoluzione”, termine usato invece da Bonnet (1762) nella sua teoria dell’homunculus, proprio perché portatore della valenza semantica di “progresso”, non presente nella teoria che Darwin proponeva. Infatti, per quest’ultimo “evoluzione” ha più a che fare con il cambiamento (x --> y) che con il progresso (x --> x+1). L’idea che il concetto di evoluzione abbia a che fare con quello di progresso è in realtà posteriore: nell’accezione più comune del termine è presente l’idea di una temporalità lineare, nella quale l’hic et nunc è visto come la massima compiutezza dello sviluppo, della complessità e della “modernità”, e il passato è visto da un punto di vista situato in un setting storico del presente (antropo-, etno-, euro-, ego-centrico etc), in un’opposizione binaria tra “adesso” e “allora”, tra “noi” e “loro”, tra “progredito” e “primitivo”. Eppure l’evoluzione, in senso stretto, non è teleologica e non c’è un “avanti” o un “indietro”, c’è solo un cambiamento causato dall’adattamento nell’ecosistema in cui l’essere storico si trova. Evoluzione non è necessariamente sinonimo di ottimizzazione (chi può dire che la “prossima generazione” sarà migliore?).
My hypothesis is that this metaphor (language) influences the way we conceive of and reason about an object (thought). Anticipating some data, I draw on the linguistic disciplines, within which one speaks of the evolution not only of language in general but also of particular languages. For example, the idea that a language is syntactically less complex, as in the case of the Pirahã language of South America, has generated a judgment of “primitivism” toward the people who speak it, above all on the part of some followers of Chomsky and others. In other social sciences, certain cultural manifestations, such as art, are understood as “primitive” or “modern”, or one speaks of the evolution of literary genres. Perhaps the most striking demonstration of this anthropocentrism concerns the problem of the genus Homo, in which the advent of Anatomically Modern Humans is made to coincide with the birth of culture, applying a double standard of modernity, given that the Neanderthal was probably far more similar to us than is commonly thought.
The use of the idea of evolution as a metaphor can be extremely powerful in the academic environment, but attention must be paid to its possible implications. My aim is to analyze this metaphor, commonly used across the various disciplines, from the standpoint of cognitive linguistics (frames and conceptual metaphors), highlighting how the target concept inherits implications that emerge from the qualities proper to the source concept, in order to show that the way a concept is framed often conditions the methodology of study, as well as the taxonomy applied to the object studied.Vito Evolaevola@unipa.it2008-12-17T22:13:26Z2011-03-11T08:57:17Zhttp://cogprints.org/id/eprint/6295This item is in the repository with the URL: http://cogprints.org/id/eprint/62952008-12-17T22:13:26ZDeconstructing Javanese Batik Motif: When Traditional Heritage Meets ComputationThe paper discusses some aspects of Iterated Function Systems while offering an interesting point of view on Indonesian traditional batik. The deconstruction rests on the Collage Theorem, used to find the affine transforms of the iterated function system whose attractor, drawn dot by dot through iteration, yields the complex motifs of, or at least patterns highly similar to, batik. We employ and revisit the well-known Chaos Game to reconstruct the patterns after some basic motifs have been deconstructed. The reconstruction of complex patterns opens a quest for creativity, broadening computationally generated batik by exploiting its self-similarity properties. The challenge of bringing modern computational generative art together with traditional batik designs is expected to yield aesthetically interesting results.
The paper concludes with two directions for our further endeavors in this field: on the one hand, enriching our understanding, from an anthropological perspective, of how human cognition has traditionally created such beautiful patterns and designs since ancient civilizations; on the other, providing a tool for the empowerment of batik as generative aesthetics through computation.
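The Chaos Game reconstruction described in the abstract above can be illustrated in a few lines. This is a minimal sketch, not the authors' implementation: the three affine maps below are the textbook Sierpinski-triangle coefficients, standing in for the maps that the Collage Theorem would fit to an actual batik motif.

```python
import random

# Each map is (a, b, c, d, e, f) for the affine transform
# (x, y) -> (a*x + b*y + e, c*x + d*y + f).
# Sierpinski-triangle coefficients, used here only as a stand-in for
# Collage-Theorem-fitted maps of a real batik motif.
MAPS = [
    (0.5, 0.0, 0.0, 0.5, 0.00, 0.0),
    (0.5, 0.0, 0.0, 0.5, 0.50, 0.0),
    (0.5, 0.0, 0.0, 0.5, 0.25, 0.5),
]

def chaos_game(maps, n_points=50_000, seed=0):
    """Repeatedly apply a randomly chosen affine map to the current point;
    the dots settle onto the attractor of the iterated function system."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    points = []
    for i in range(n_points):
        a, b, c, d, e, f = rng.choice(maps)
        x, y = a * x + b * y + e, c * x + d * y + f
        if i > 20:  # skip the transient before the orbit reaches the attractor
            points.append((x, y))
    return points

pts = chaos_game(MAPS)
```

Scattering `pts` with any plotting library renders the attractor; replacing `MAPS` with coefficients recovered via the Collage Theorem would instead render the approximated batik pattern.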
Hokky Situngkirhs@compsoc.bandungfe.net2008-10-16T13:45:55Z2011-03-11T08:57:12Zhttp://cogprints.org/id/eprint/6232This item is in the repository with the URL: http://cogprints.org/id/eprint/62322008-10-16T13:45:55ZExperimental philosophy and the MBI
Various facets of the MBI are discussed, along with how it can be used in connection with experimental philosophy, experimental psychology and neuroscience. Brief historical references are given. The large implications of the MBI with regard to McTaggart's paradox and the resolution of the difficulties with quantum mechanics are mentioned. Later sections deal with the mereological fallacy, multiple universes, teletransportation, mind cloning and mind splitting. Dreamwork is chosen as a prime example of the use of the MBI, and recent work by Tononi and Baars is referred to.
Dr. John Yatesuvscience@gmail.com2008-09-19T13:58:35Z2011-03-11T08:57:12Zhttp://cogprints.org/id/eprint/6207This item is in the repository with the URL: http://cogprints.org/id/eprint/62072008-09-19T13:58:35ZInformation Technology as an Agent of Post-modernismSociety is in a tumultuous state. Today’s Western society is characterized by disillusionment, doubt, irony, fragmentation and plurality. With the failure of Modernism and the rise to prominence of Nihilism, Post-Humanism, Post-Structuralism and Individualism, society has entered a thoroughly Post-Modern era.
Over the past couple of decades humanity has increasingly turned to Information Technology as the great enabler. Through the capabilities that Information Technology offers, undreamed-of heights of scientific and technological progress have been reached in an amazingly short span of time. However, rather than uplifting and emancipating society, the wholesale implementation of Information Technology has brought with it a host of unintended and unforeseen consequences. As with the promises of Modernism, Information Technology has not brought society the Utopia that it imagined. Rather, Information Technology has acted to create a universe characterized by virtuality, constant change, indeterminacy, and an information-oriented perspective on the world. Technological progress has not been accompanied by social progress.
Through a comprehensive literature review and an examination of both Post-Modernism and Information Technology, it is proposed that the influences of Information Technology have acted, and continue to act, to promote Post-Modernism. These influences include, among others, its displacement of space and time, its promotion of the Information Society, its ability to create digital hyperrealities, its destructive influence on tradition and culture, and most of all its catastrophic/revolutionary impact on identity. Through these influences this paper seeks to prove that Information Technology acts as an agent of Post-Modernism.
Mr DF NelProf JH Kroezejan.kroeze@up.ac.za2008-12-17T22:12:50Z2011-03-11T08:57:17Zhttp://cogprints.org/id/eprint/6298This item is in the repository with the URL: http://cogprints.org/id/eprint/62982008-12-17T22:12:50ZEmergent Innovation—a Socio-Epistemological Innovation Technology. Creating Profound Change and Radically New Knowledge as Core Challenges in Knowledge ManagementThis paper introduces an alternative approach to innovation: Emergent Innovation. As opposed to radical innovation, Emergent Innovation finds a balance, integrating the demand for radically new knowledge with the demand for an organic development from within the organization. From a knowledge management perspective, one can boil down this problem to the question of how to cope with the new and with profound change in knowledge. This question will be dealt with in the first part of the paper. As an implication, the alternative approach of Emergent Innovation will be presented in the second part: this approach looks at innovation as a socio-epistemological process of “learning from the future”.
Keywords:
Innovation, radical innovation, emergent innovation, knowledge creation, change.Markus F. PeschlFranz-Markus.Peschl@univie.ac.atThomas Fundneidertf@tfc.at2008-08-24T10:57:41Z2011-03-11T08:57:09Zhttp://cogprints.org/id/eprint/6113This item is in the repository with the URL: http://cogprints.org/id/eprint/61132008-08-24T10:57:41ZLa metafora come carrefour cognitivo del pensiero e del linguaggioOver the last thirty years, the cognitive sciences have proposed an alternative to theories that understood metaphor as a linguistic device, that is, theories holding that the metaphorical process could be reduced to the literal, semantic, or pragmatic level. According to conceptual metaphor theory, metaphor is a way of representing and organizing our world, rather than a merely decorative instrument of language with a purely communicative role. This paradigm shift has also influenced other areas of the cognitive sciences. This contribution outlines the current state of the theory put forward by Lakoff and Johnson and the maturation of their thinking since the first publication of Metaphors We Live By (1980/1998). After illustrating the theoretical principles, examples of cultural and multimodal metaphors are given, and the analogous but distinct role that metonymy plays alongside metaphor within our conceptual systems is specified.
Vito Evolaevola@unipa.it2008-08-30T23:20:48Z2011-03-11T08:57:10Zhttp://cogprints.org/id/eprint/6176This item is in the repository with the URL: http://cogprints.org/id/eprint/61762008-08-30T23:20:48ZCategory theory applied to a radically new but logically essential description of time and spaceMcTaggart's ideas on the unreality of time as expressed in "The Nature of Existence" have retained great interest for many years among scholars, academics and other philosophers. This essay briefly discusses some of the high points of this philosophical interest, and goes on to apply his ideas to modern physics and neuroscience. It does not discuss McTaggart's C and D series, but does emphasise how derived versions of both his A and B series can be of great virtue in discussing both the abstract physics of time and the present and future importance of McTaggart's ideas for the subject of time. An experiment using human volunteers and dynamic-systems modelling, carried out to illustrate this fact, is described. The Many Bubble Interpretation, which also derives from McTaggart's ideas, is discussed, and various examples of its use and effectiveness are referred to. The Schrödinger Cat paradox is essentially resolved in principle, the quantum Zeno effect rendered interpretable, Kwiat's recent result referred to, and the newly discovered reverse Stickgold effect described.
Dr. John Yatesuvcorr@gmail.com2009-04-21T02:38:44Z2011-03-11T08:57:21Zhttp://cogprints.org/id/eprint/6421This item is in the repository with the URL: http://cogprints.org/id/eprint/64212009-04-21T02:38:44ZThe Intentionality of Plover Cognitive StatesThis paper attempts to clarify and justify the attribution of mental states to animals by focusing on two different conceptions of intentionality: instrumentalist and realist. I use each of these general views to interpret and discuss the behavior and cognitive states of piping plovers in order to provide a substantive way to frame the question of animal minds. I argue that attributing mental states to plovers is warranted for instrumentalists insofar as it is warranted for similar human behavior. For realists about intentionality, the complexity, adaptability and flexibility of the plovers’ behavior, along with their ability to utilize the content of their representations and to satisfy the conditions of concept attribution, justifies attributing intentionality to plovers. Getting clearer on what is meant by animal minds provides a better idea of what to look for in animal behavior. In many respects, investigating such phenomena is similar to investigations in other sciences.Chuck Stiegstie0076@umn.edu2008-07-15T09:54:12Z2011-03-11T08:57:08Zhttp://cogprints.org/id/eprint/6094This item is in the repository with the URL: http://cogprints.org/id/eprint/60942008-07-15T09:54:12ZWhat is a worldview?The first part of this paper proposes a precise definition of what a worldview is, and explains why it is necessary to have one. The second part suggests how to construct integrated scientific worldviews. For this attempt, three general scientific approaches are proposed: general systems theory as the endeavor toward a universal language for science, a general problem-solving approach, and the idea of evolution, broadly construed.
We close with some remarks about limitations of scientific worldviews.Clément Vidal2008-08-24T10:57:20Z2011-03-11T08:57:10Zhttp://cogprints.org/id/eprint/6172This item is in the repository with the URL: http://cogprints.org/id/eprint/61722008-08-24T10:57:20ZNeuroethical Considerations Regarding Transcranial Magnetic StimulationAlong with advances in brain technologies comes the ability to enhance the cognitive and affective states of normal people. In this essay, I examine a relatively young technology used in cognitive neuroscience called transcranial magnetic stimulation (TMS). I explain what it is, how it works and what some of its applications are. I suggest that a potential source of reservation one might have regarding brain-altering enhancement is the threat it seemingly poses to the subjective importance of mental states. I then consider the possibility of its being used as an enhancement device and question the authenticity of abilities of individuals that are enhanced by use of TMS. I conclude that judgments regarding the appropriateness of such neurocognitive enhancements should be considered on a case-by-case basis.Chuck Stiegstie0076@umn.edu2009-02-13T01:12:55Z2011-03-11T08:57:18Zhttp://cogprints.org/id/eprint/6345This item is in the repository with the URL: http://cogprints.org/id/eprint/63452009-02-13T01:12:55ZAnxiety and Posttraumatic Stress Disorder in the Context of Human Brain Evolution: A Role for Theory in DSM-V?The “hypervigilance, escape, struggle, tonic immobility” evolutionarily hardwired acute peritraumatic response sequence is important for clinicians to understand. Our commentary supplements the useful article on human tonic immobility (TI) by Marx, Forsyth, Gallup, Fusé and Lexington (2008). A hallmark sign of TI is peritraumatic tachycardia, which others have documented as a major risk factor for subsequent posttraumatic stress disorder (PTSD). TI is evolutionarily highly conserved (uniform across species) and underscores the need for DSM-V planners to consider the inclusion of evolution theory in the reconceptualization of anxiety and PTSD. We discuss the relevance of evolution theory to the DSM-V reconceptualization of acute dissociative-conversion symptoms and of epidemic sociogenic disorder (epidemic “hysteria”). Both are especially in need of attention in light of the increasing threat of terrorism against civilians. We provide other pertinent examples. Finally, evolution theory is not ideology-driven (and makes testable predictions regarding etiology in “both directions”). For instance, it predicted the unexpected finding that some disorders conceptualized in DSM-IV-TR as innate phobias are conditioned responses and thus better conceptualized as mild forms of PTSD. Evolution theory may offer a conceptual framework in DSM-V both for treatment and for research on psychopathology.
Dr. H. Stefan Brachah.bracha@va.govDr. Jack D. Maserjmaser@ucsd.edu2008-05-18T01:17:47Z2011-03-11T08:57:07Zhttp://cogprints.org/id/eprint/6077This item is in the repository with the URL: http://cogprints.org/id/eprint/60772008-05-18T01:17:47ZThe computational generative patterns in Indonesian batikThe paper discusses the terminology behind batik crafting and shows the aspects of self-similarity in its ornaments. Even though a product of batik cannot be reduced merely to its decorative properties, it is shown that computation can capture some interesting aspects of batik-making ornamentation. Three methods can be exploited for generative batik: using fractals as the main source of decorative patterns; the hybrid batik that emerges from applying the L-System Thue-Morse algorithm for harmonization within the grand designs, using both fractal images and traditional batik patterns; and using random image tessellation, as well as previous tiling algorithms, for generating batik designs. The last can draw on broad sources of motifs and traditionally recognized graphics. The paper concludes with aspects that show how the harmony of traditional crafting and modern computation could bring out more of the creative aspects of the beautiful harmony inherent in the aesthetics of batik crafting.Hokky Situngkir2008-06-13T00:09:09Z2011-03-11T08:57:08Zhttp://cogprints.org/id/eprint/6096This item is in the repository with the URL: http://cogprints.org/id/eprint/60962008-06-13T00:09:09ZAgainst the Tide. A Critical Review by Scientists of How Physics and Astronomy Get DoneNobody should have a monopoly on the truth in this universe. The censorship and suppression of challenging ideas that go against the tide of mainstream research, and the blacklisting of scientists, for instance, are neither the best way to do and filter science, nor the way to promote progress in human knowledge.
The removal of good and novel ideas from the scientific stage is very detrimental to the pursuit of the truth. There are instances in which a mere unqualified belief can occasionally be converted into a generally accepted scientific theory through the screening action of refereed literature and meetings planned by scientific organizing committees, and through the distribution of funds controlled by "club opinions". This leads to unitary paradigms and unitary thinking not necessarily associated with the truth. This is the topic of this book: to critically analyze the problems of the official (and sometimes illicit) mechanisms under which current science (physics and astronomy in particular) is administered and filtered today, along with the onerous consequences these mechanisms have on all of us.
The authors, all of them professional researchers, offer a pessimistic view of the miseries of the present system, while a glimmer of hope remains in the "leitmotiv" of a claim for freedom in doing research and for an acceptable level of ethics in science.Martín López CorredoiraCarlos Castro PerelmanJuan Miguel CampanarioBrian MartinWolfgang KundtJ. Marvin HerndonMarian ApostolHalton C. ArpTom Van FlandernAndrei P. KirilyukHenry H. Bauer2009-10-15T22:58:25Z2011-03-11T08:57:27Zhttp://cogprints.org/id/eprint/6638This item is in the repository with the URL: http://cogprints.org/id/eprint/66382009-10-15T22:58:25ZLogical openness in Cognitive Models We propose here an analysis of symbolic and sub-symbolic models for studying cognitive processes, centered on the notions of emergence and logical openness. The theory of logical openness connects the physics of system/environment relationships to a system's informational structure. In this theory, cognitive models can be ordered according to a hierarchy of complexity depending on their degree of logical openness, and their descriptive limits are correlated to the Gödel-Turing theorems on formal systems. Symbolic models with low logical openness describe cognition by means of semantics which fix the system/environment relationship (cognition in vitro), while sub-symbolic models with high logical openness tend to capture its evolving dynamics (cognition in vivo). An observer is defined as a system with high logical openness. In conclusion, the characteristic processes of intrinsic emergence typical of "bio-logic" (the emergence of new codes) require an alternative model to Turing computation: natural or bio-morphic computation, whose essential features we outline here.Prof. 
Ignazio Licataignazio.licata@ejtp.info2008-06-27T01:40:29Z2011-03-11T08:57:08Zhttp://cogprints.org/id/eprint/6103This item is in the repository with the URL: http://cogprints.org/id/eprint/61032008-06-27T01:40:29ZReflexive MonismReflexive monism is, in essence, an ancient view of how consciousness relates to the material world that has, in recent decades, been resurrected in modern form. In this paper I discuss how some of its basic features differ from both dualism and variants of physicalist and functionalist reductionism, focusing on those aspects of the theory that challenge deeply rooted presuppositions in current Western thought. I pay particular attention to the ontological status and seeming “out-thereness” of the phenomenal world and to how the “phenomenal world” relates to the “physical world”, the “world itself”, and processing in the brain. In order to place the theory within the context of current thought and debate, I address questions that have been raised about reflexive monism in recent commentaries and also evaluate competing accounts of the same issues offered by “transparency theory” and by “biological naturalism”. I argue that, of the competing views on offer, reflexive monism most closely follows the contours of ordinary experience, the findings of science, and common sense.Prof Max Velmansm.velmans@gold.ac.uk2007-07-28Z2011-03-11T08:56:55Zhttp://cogprints.org/id/eprint/5623This item is in the repository with the URL: http://cogprints.org/id/eprint/56232007-07-28ZDecision-Making: A Neuroeconomic PerspectiveThis article introduces and discusses from a philosophical point of view the nascent field of neuroeconomics, which is the study of neural mechanisms involved in decision-making and their economic significance. 
Following a survey of the ways in which decision-making is usually construed in philosophy, economics and psychology, I review many important findings in neuroeconomics to show that they suggest a revised picture of decision-making and ourselves as choosing agents. Finally, I outline a neuroeconomic account of irrationality. Benoit Hardy-Vallee2007-07-28Z2011-03-11T08:56:53Zhttp://cogprints.org/id/eprint/5611This item is in the repository with the URL: http://cogprints.org/id/eprint/56112007-07-28ZIncompleteness and the Romance with Science It is argued that qualities of complete/incomplete science theory do not relate to the fertility or
diversity or validity of science theory, but correspond with social, behavioral, and moral values, and trespass into the realms of innate knowing, absolutes, cognition and behavior. It is suggested that terms such as "innately incomplete" are redundant in description, i.e. "flats are innately flat" (a curved dwelling would not be suitable for habitation); it is similarly very difficult to find other words with which to speak of the notion of science as incomplete.Dr. Marvin Kirsh2007-06-07Z2011-03-11T08:56:52Zhttp://cogprints.org/id/eprint/5589This item is in the repository with the URL: http://cogprints.org/id/eprint/55892007-06-07ZThe Last Scientific RevolutionCritically growing problems of fundamental science organisation and content are analysed with examples from physics and emerging interdisciplinary fields. Their origin is specified and a new science structure (organisation and content) is proposed as a unified solution.Andrei Kirilyuk2007-05-08Z2011-03-11T08:56:50Zhttp://cogprints.org/id/eprint/5499This item is in the repository with the URL: http://cogprints.org/id/eprint/54992007-05-08ZSCIENCE, LANGUAGE AND ONTOLOGY: A BIRD IN THE HAND IS WORTH TWO IN THE BUSH A comment on: Language, logic and ontology: uncovering the structure of commonsense knowledge.Dr. Marvin Kirsh2007-04-26Z2011-03-11T08:56:50Zhttp://cogprints.org/id/eprint/5511This item is in the repository with the URL: http://cogprints.org/id/eprint/55112007-04-26ZA Brief Introduction to N-universesI describe in this paper the basic elements of the n-universes, a methodological tool originally introduced in Franceschi (2001) in the context of the study of Goodman's paradox. As the n-universes can be used in wide-ranging applications, such as thought experiments, I describe them from an essentially pragmatic standpoint, i.e. by describing accurately the step-by-step process which leads to a given modelisation.Paul Franceschi2007-08-20Z2011-03-11T08:56:56Zhttp://cogprints.org/id/eprint/5650This item is in the repository with the URL: http://cogprints.org/id/eprint/56502007-08-20ZConcept of disease as an ontological device to shape human existenceDisease and health are conceptual cornerstones of the framework of all medicine. 
In different times these concepts have had different contents, but in all times their role has also been to express some status of personal human existence and to shape its general and particular, individual and collective forms through various desired or avoided scenarios of human behavior. A long-term solo of biological reductionism has strengthened the concept of disease as a tool to differentiate normal from abnormal structure and behavior of human beings. The current symbiosis of biological reductionism with the trend toward stronger social regulation can rewrite the content of the concept of disease in the direction of increasing the external determinism of personal human existence. Philosophical analysis of the concept of disease can support coherence in the heterogeneous ideological framework of medicine and help to bind the technical side of medicine to appropriate values.
Andres Soosaar2007-03-07Z2011-03-11T08:56:47Zhttp://cogprints.org/id/eprint/5442This item is in the repository with the URL: http://cogprints.org/id/eprint/54422007-03-07ZDesign and Control of Self-organizing SystemsComplex systems are usually difficult to design and control. There are several particular methods for coping with complexity, but there is no general approach to build complex systems. In this thesis I propose a methodology to aid engineers in the design and control of complex systems. This is based on the description of systems as self-organizing. Starting from the agent metaphor, the methodology proposes a conceptual framework and a series of steps to follow to find proper mechanisms that will promote elements to find solutions by actively interacting among themselves. The main premise of the methodology claims that reducing the “friction” of interactions between elements of a system will result in a higher “satisfaction” of the system, i.e. better performance.
A general introduction to complex thinking is given, since designing self-organizing systems requires a non-classical thought, while practical notions of complexity and self-organization are put forward. To illustrate the methodology, I present three case studies. Self-organizing traffic light controllers are proposed and studied with multi-agent simulations, outperforming traditional methods. Methods for improving communication within self-organizing bureaucracies are advanced, introducing a simple computational model to illustrate the benefits of self-organization. In the last case study, requirements for self-organizing artifacts in an ambient intelligence scenario are discussed. Philosophical implications of the conceptual framework are also put forward.Carlos Gershenson2009-06-10T07:58:46Z2011-03-11T08:57:21Zhttp://cogprints.org/id/eprint/6451This item is in the repository with the URL: http://cogprints.org/id/eprint/64512009-06-10T07:58:46ZAN EPISTEMOLOGY FOR THE STUDY OF CONSCIOUSNESS This is a prepublication version of the final chapter from the Blackwell Companion to Consciousness. In it I re-examine the basic conditions required for a study of conscious experiences in the light of progress made in recent years in the field of consciousness studies. I argue that neither dualist nor reductionist assumptions about subjectivity versus objectivity and the privacy of experience versus the public nature of scientific observations allow an adequate understanding of how studies of consciousness actually proceed. The chapter examines the sense in which the experimenter is also a subject, the sense in which all experienced phenomena are private and subjective, the different senses in which a phenomenon can nevertheless be public and observations of it objective, and the conditions for intra-subjective and intersubjective repeatability. The chapter goes on to re-examine the empirical method and how methods used in psychology differ from those used in physics. 
I argue that a reflexive understanding of these relationships supports a form of “critical phenomenology” that fits consciousness studies smoothly into science. Prof Max Velmans2007-08-29Z2011-03-11T08:56:57Zhttp://cogprints.org/id/eprint/5676This item is in the repository with the URL: http://cogprints.org/id/eprint/56762007-08-29ZMotion This article is about orientation in the conceptual construction and exploration of the world. Orientations that fail to include a satisfactory definition of self as a vital component in ideas of explanation, compulsively leaning towards excessive analytical description (partism) and resulting in increased numbers of empirically found exceptions to theoretical ideas, also fail to include adequate notions of motion and change. In the science of cognition a three-part picture usually results, rather than a two-component one, in which the extraneous component functions as a compensation for the initial vagueness in ideas. Though this can seem to be a reasonable approach, to proceed from vagueness to conjecture to empirical test/comparison, a false order in all components of a final theory will continuously result, and ultimately, in one-to-one correspondence, equate with a separate topic and not with the original. A compulsive and strict adherence to common sense, though not seeming to supply adequate explanation and strained for lingual description/expression, is the only possible route to adequate explanation.
In cognition, the perennial stumbling is always at the division between the ethereal and the tangible. It is such an inhibitory obstacle, that in the construction of ideas, language falters to result in the continual construction of new words to “describe” rather than to connect. Though I believe “describe” is also the real ultimate goal, a real connection is never established.
Dr. Marvin E. Kirsh2007-10-22T10:41:58Z2011-03-11T08:56:59Zhttp://cogprints.org/id/eprint/5778This item is in the repository with the URL: http://cogprints.org/id/eprint/57782007-10-22T10:41:58ZOn the Role of AI in the Ongoing Paradigm Shift within the Cognitive SciencesThis paper supports the view that the ongoing shift from orthodox to embodied-embedded cognitive science has been significantly influenced by the experimental results generated by AI research. Recently, there has also been a noticeable shift toward enactivism, a paradigm which radicalizes the embodied-embedded approach by placing autonomous agency and lived subjectivity at the heart of cognitive science. Some first steps toward a clarification of the relationship of AI to this further shift are outlined. It is concluded that the success of enactivism in establishing itself as a mainstream cognitive science research program will depend less on progress made in AI research and more on the development of a phenomenological pragmatics.Mr Tom Froeset.froese@gmail.com2008-06-27T01:41:08Z2011-03-11T08:57:08Zhttp://cogprints.org/id/eprint/6109This item is in the repository with the URL: http://cogprints.org/id/eprint/61092008-06-27T01:41:08ZPsychophysical NatureThere are two quite distinct ways in which events that we normally think of as “physical” relate in an intimate way to events that we normally think of as “psychological”. One intimate relation occurs in exteroception at the point where events in the world become events as-perceived. The other intimate relationship occurs at the interface of conscious experience with its neural correlates in the brain. The chapter examines each of these relationships and positions them within a dual-aspect, reflexive model of how consciousness relates to the brain and external world. 
The chapter goes on to provide grounds for viewing mind and nature as fundamentally psychophysical, and examines similar views as well as differences in previously unpublished writings of Wolfgang Pauli, one of the founders of quantum mechanics.Prof Max Velmans2008-08-10T08:56:23Z2011-03-11T08:57:10Zhttp://cogprints.org/id/eprint/6161This item is in the repository with the URL: http://cogprints.org/id/eprint/61612008-08-10T08:56:23ZTriple-loop learning as foundation for profound change, individual cultivation, and radical innovation. Construction processes beyond scientific and rational knowledge.Purpose: Ernst von Glasersfeld’s question concerning the relationship between scientific/
rational knowledge and the domain of wisdom and how these forms of knowledge come
about is the starting point. This article aims at developing an epistemological as well as
methodological framework that is capable of explaining how profound change can be
brought about in various contexts, such as in individual cultivation, in organizations, in
processes of radical innovation, etc. This framework is based on the triple-loop learning
strategy and the U-theory approach, which opens up a perspective on how the domains of
scientific/rational knowledge, constructivism, and wisdom could grow together more
closely. Design/Structure: This article develops a strategy which is referred to as “triple-loop
learning,” which is not only the basis for processes of profound change, but also brings
about a new dimension in the field of learning and knowledge dynamics: the existential
realm and the domain of wisdom. A concrete approach that puts into practice the triple-loop
learning strategy is presented. The final section shows how these concepts can be
interpreted in the context of the constructivist approach and how they might offer some
extensions to this paradigm. Findings: The process of learning and change has to be
extended to a domain that concerns existential issues as well as questions of wisdom.
Profound change can only happen if these domains are taken into consideration. The triple-loop
learning strategy offers a model that fulfills this criterion. It is an “epistemo-existential
strategy” for profound change on various levels. Conclusions: The (cognitive) processes
and attitudes of receptivity, suspension, redirecting, openness, deep knowing, as well as
“profound change/innovation from the interior” turn out to be core concepts in this process.
They are compatible with constructivist concepts. Von Glasersfeld’s concept of functional
fitness is carried to an extreme in the suggested approach of profound change and
finds an extension in the existential domain. Key words: Double-loop learning, individual
cultivation, (radical) innovation, knowledge creation, knowledge society, personality development,
presencing, profound change, triple-loop learning, U-theory, wisdom.Markus F. PeschlFranz-Markus.Peschl@univie.ac.at2006-12-03Z2011-03-11T08:56:42Zhttp://cogprints.org/id/eprint/5264This item is in the repository with the URL: http://cogprints.org/id/eprint/52642006-12-03ZWhat can we do with the Research Institute for Social Complexity Sciences in Indonesia?The article discusses research opportunities in social complexity studies, especially in Indonesia. This issue is connected to the establishment of a social research institute in Indonesia, and to how to establish and maintain it given the interdisciplinary character of the field. Although many local considerations bear on maintaining a social complexity research institute, there will always be things that can be learned from other similar research institutes. Hokky Situngkir2008-05-11T02:47:06Z2011-03-11T08:57:07Zhttp://cogprints.org/id/eprint/6043This item is in the repository with the URL: http://cogprints.org/id/eprint/60432008-05-11T02:47:06ZThe Aleph Zero or Zero DichotomyThis paper proves the existence of a dichotomy which, being formally derived from the topological successiveness of ω-order, leads to the same absurdity as Zeno's Dichotomy II. 
It also derives a contradictory result from Zeno's first Dichotomy.Antonio Leon2006-08-11Z2011-03-11T08:56:34Zhttp://cogprints.org/id/eprint/5062This item is in the repository with the URL: http://cogprints.org/id/eprint/50622006-08-11ZHilbert's machine and the Axiom of InfinityHilbert's machine is a supertask machine inspired by Hilbert's Hotel whose functioning leads to a contradiction that compromises the Axiom of Infinity.Antonio Leon2006-07-23Z2011-03-11T08:56:32Zhttp://cogprints.org/id/eprint/5013This item is in the repository with the URL: http://cogprints.org/id/eprint/50132006-07-23ZHuman brain evolution and the "Neuroevolutionary Time-depth Principle:" Implications for the Reclassification of fear-circuitry-related traits in DSM-V and for studying resilience to warzone-related posttraumatic stress disorder.
The DSM-III, DSM-IV, DSM-IV-TR and ICD-10 have judiciously minimized discussion of etiologies to distance clinical psychiatry from Freudian psychoanalysis. With this goal mostly achieved, discussion of etiological factors should be reintroduced into the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-V). A research agenda for the DSM-V advocated the "development of a pathophysiologically based classification system". The author critically reviews the neuroevolutionary literature on stress-induced and fear circuitry disorders and related amygdala-driven, species-atypical fear behaviors of clinical severity in adult humans. Over 30 empirically testable/falsifiable predictions are presented. It is noted that in DSM-IV-TR and ICD-10, the classification of stress and fear circuitry disorders is neither mode-of-acquisition-based nor brain-evolution-based. For example, snake phobia (innate) and dog phobia (overconsolidational) are clustered together. Similarly, research on blood-injection-injury-type-specific phobia clusters two fears different in their innateness: 1) an arguably ontogenetic memory-trace-overconsolidation-based fear (hospital phobia) and 2) a hardwired (innate) fear of the sight of one's blood or a sharp object penetrating one's skin. Genetic architecture-charting of fear-circuitry-related traits has been challenging. Various, non-phenotype-based architectures can serve as targets for research. In this article, the author will propose one such alternative genetic architecture. 
This article was inspired by the following: A) Nesse's "Smoke-Detector Principle", B) the increasing suspicion that the "smooth" rather than "lumpy" distribution of complex psychiatric phenotypes (including fear-circuitry disorders) may in some cases be accounted for by oligogenic (and not necessarily polygenic) transmission, and C) insights from the initial sequence of the chimpanzee genome and comparison with the human genome by the Chimpanzee Sequencing and Analysis Consortium published in late 2005. Neuroevolutionary insights relevant to fear circuitry symptoms that primarily emerge overconsolidationally (especially Combat-related Posttraumatic Stress Disorder) are presented. Also introduced is a human-evolution-based principle for clustering innate fear traits. The "Neuroevolutionary Time-depth Principle" of innate fears proposed in this article may be useful in the development of a neuroevolution-based taxonomic re-clustering of stress-triggered and fear-circuitry disorders in DSM-V. Four broad clusters of evolved fear circuits are proposed based on their time-depths: 1) Mesozoic (mammalian-wide) circuits hardwired by wild-type alleles driven to fixation by Mesozoic selective sweeps; 2) Cenozoic (simian-wide) circuits relevant to many specific phobias; 3) mid-Paleolithic and upper Paleolithic (Homo sapiens-specific) circuits (arguably resulting mostly from mate-choice-driven stabilizing selection); 4) Neolithic circuits (arguably mostly related to stabilizing selection driven by gene-culture co-evolution). More importantly, the author presents evolutionary perspectives on warzone-related PTSD, Combat-Stress Reaction, Combat-related Stress, Operational-Stress, and other deployment-stress-induced symptoms. The Neuroevolutionary Time-depth Principle presented in this article may help explain the dissimilar stress-resilience levels following different types of acute threat to survival of oneself or one's progeny (aka DSM-III and DSM-V PTSD Criterion-A events). 
PTSD rates following exposure to lethal inter-group violence (combat, warzone exposure or intentionally caused disasters such as terrorism) are usually 5-10 times higher than rates following large-scale natural disasters such as forest fires, floods, hurricanes, volcanic eruptions, and earthquakes. The author predicts that both intentionally-caused large-scale bioevent-disasters and natural bioevents such as SARS and avian flu pandemics will be an exception, and are likely to be followed by PTSD rates approaching those that follow warzone exposure. During bioevents, amygdala-driven and locus-coeruleus-driven epidemic pseudosomatic symptoms may be an order of magnitude more common than infection-caused cytokine-driven symptoms. Implications for the Red Cross and FEMA are discussed. It is also argued that hospital phobia as well as dog phobia, bird phobia and bat phobia require re-taxonomization in DSM-V in a new "overconsolidational disorders" category anchored around PTSD. The overconsolidational spectrum category may be conceptualized as straddling the fear circuitry spectrum disorders and the affective spectrum disorders categories, and may be a category for which Pitman's secondary prevention propranolol regimen may be specifically indicated as a "morning after pill" intervention. Predictions are presented regarding obsessive-compulsive disorder (OCD) (e.g., female-pattern hoarding vs. male-pattern hoarding) and "culture-bound" acute anxiety symptoms (taijin-kyofusho, koro, shuk yang, shook yong, suo yang, rok-joo, jinjinia-bemar, karoshi, gwarosa, Voodoo death). Also discussed are insights relevant to pseudoneurological symptoms and to the forthcoming Dissociative-Conversive disorders category in DSM-V, including what the author terms fright-triggered acute pseudo-localized symptoms (i.e., pseudoparalysis, pseudocerebellar imbalance, psychogenic blindness, pseudoseizures, and epidemic sociogenic illness). 
Speculations based on studies of the human abnormal-spindle-like, microcephaly-associated (ASPM) gene, the microcephaly primary autosomal recessive (MCPH) gene, and the forkhead box p2 (FOXP2) gene are made and incorporated into what is termed "The pre-FOXP2 Hypothesis of Blood-Injection-Injury Phobia." Finally, the author argues for a non-reductionistic fusion of "distal (evolutionary) neurobiology" with clinical "proximal neurobiology," utilizing neurological heuristics. It is noted that the value of re-clustering fear traits based on behavioral ethology, human-phylogenomics-derived endophenotypes and on ontogenomics (gene-environment interactions) can be confirmed or disconfirmed using epidemiological or twin studies and psychiatric genomics.
Dr. H. Stefan Bracha2008-07-15T09:55:43Z2011-03-11T08:57:09Zhttp://cogprints.org/id/eprint/6118This item is in the repository with the URL: http://cogprints.org/id/eprint/61182008-07-15T09:55:43ZSt. Paul's Error: The Semantic Changes of BODY and SOUL in the Western World Historically Christianity owes much to Judaism. St. Paul’s Christianity, however, changed the way of thinking of many of the first Jews because of a new way of reasoning about selfhood, the human body, and human cognition. Without wanting to treat certain theological concepts, I want to underline how modern science’s view of the person is closer to traditional Judaism than it is to Christianity, and how Paul’s “error” was diffused throughout the Western world, by analyzing the semantics of linguistic references to the body, the soul, and emotions.
What was St. Paul’s error? The question is meant to be both allusive and provocative. He was born with the name Saul in the city of Tarsus, in modern Turkey, during the height of its splendour as a Roman-Greek city. Paul grew up as a “free man”, that is, as a Roman citizen in a cosmopolitan environment. He is considered to be the most influential and prolific of the witnesses of Christian thought throughout Asia Minor and Western Europe. His epistles circulated throughout his time and continue to influence millions of followers, who often interpret his thoughts in contrasting manners, but who nonetheless attest to his authority.
An erudite Greek-Roman, persecutor of the first Christians, Paul battled to spread the story of Jesus of Nazareth. His ideology, indeed, is a blend of Greek-Roman thought and of what he learned from the first Christians. The Hellenic characteristics of his faith created a divergence from traditional Judaic thought within what was to become, through his influence, the Christian creed. As a matter of fact, Christianity came to have a more coherent structure because of Paul, and Christian belief in a way is more Paul’s thought than it is Jesus’.
Jewish teaching concerning selfhood was quite holistic. The Hebrew word nephesh is often translated as “soul” but also means “body”, whereas Paul clearly distinguishes the two, talking about a co-existence, “concupiscence” and the necessity of dominating the body to exalt the spirit. I will examine the semantic changes in words dealing with body and soul, and how Paul’s authority eventually influenced the Western world’s way of reasoning about such concepts.Vito Evolaevola@unipa.it2009-05-16T16:32:18Z2011-03-11T08:57:21Zhttp://cogprints.org/id/eprint/6442This item is in the repository with the URL: http://cogprints.org/id/eprint/64422009-05-16T16:32:18ZAproximaciones a la ¿Obra de Arte?
Approaches to ¿ArtWork?Some of the fundamental questions of the philosophy of art are: 1) What is a work of art? 2) What is Art? 3) What is art? To answer them is to determine the meaning of art. Questions of this type are posed in the form “What is X?”, that is, questions in which the complex lies within the simple, where “simple” does not mean that they are easy; they are questions that carry within themselves a metaphysical, ontological, epistemological, and even axiological nature and character. The purpose of this work is not to confront or resolve these problems, but to provide an approach to the concept of “work of art” in the present era, and to how it might be understood in relation to certain current ontological and epistemological problems.Paulo Vélez Leónpaulo.velez@ucuenca.edu.ec2007-09-28T23:28:41Z2011-03-11T08:56:58Zhttp://cogprints.org/id/eprint/5723This item is in the repository with the URL: http://cogprints.org/id/eprint/57232007-09-28T23:28:41ZSymmetry and The Creative Cognition An unconquered conceptual divide related to cognitive perception exists between the physical and biological sciences. The life processes of self-assembly and replication are unaccounted for in quantum theory or in the ordinary laws of physics. Lying at the very base of this confusion is a theoretical wall outlined by statistical generalization on one border and exact historical evolution on the other. Can inert, randomly oriented, statistically described agents (atoms/molecules) direct the reproduction of like things? If the answer to this proposition is negative, then are space and matter not as assumed (i.e., as uniformly interpretable statistical entities), but things with a life-like evolving history from a unique beginning? For example: if life processes are conceptually tree-like, can (must) the processes from which they are created be defined this way also? 
Reflecting on this question, one can liken it to a similar one: can a tree exist with only one branch (i.e., can a tree exist as a simple line versus a line with an origin and history)? A conflict emerges that reveals a subtler conflict in the pursuit of an objective interpretation. A simple line is always less complex than the other and does not exist in life processes or even in the ordinary life of an individual: its history, in terms of lifetime, is infinitely
smaller the closer it resembles a simple undefined line. In defining matter statistically, we objectively claim that it has no time-dependent history, and yet it is the objective source of evolution, which by definition has a subjective history. We are left to find a new order for the definition of physical processes. In this paper, I wish to show that, with very little rearrangement of current notions, a model of space can be created that details the replication from an origin, and propagation in a tree-like manner with a declining potential, of the evolutionary processes of living things, of space, and of matter.Dr. Marvin/E. Kirshkirsh2152000@yahoo.com2006-04-21Z2011-03-11T08:56:23Zhttp://cogprints.org/id/eprint/4847This item is in the repository with the URL: http://cogprints.org/id/eprint/48472006-04-21ZComplexity and PhilosophyThe science of complexity is based on a new way of thinking that
stands in sharp contrast to the philosophy underlying Newtonian science, which is
based on reductionism, determinism, and objective knowledge. This paper reviews
the historical development of this new world view, focusing on its philosophical
foundations. Determinism was challenged by quantum mechanics and chaos theory.
Systems theory replaced reductionism by a scientifically based holism. Cybernetics
and postmodern social science showed that knowledge is intrinsically subjective.
These developments are being integrated under the header of “complexity science”.
Its central paradigm is the multi-agent system. Agents are intrinsically subjective
and uncertain about their environment and future, but out of their local interactions,
a global organization emerges. Although different philosophers, and in particular the
postmodernists, have voiced similar ideas, the paradigm of complexity still needs to
be fully assimilated by philosophy. This will throw a new light on old philosophical
issues such as relativism, ethics and the role of the subject.Francis HeylighenPaul CilliersCarlos Gershenson2006-02-27Z2011-03-11T08:56:20Zhttp://cogprints.org/id/eprint/4741This item is in the repository with the URL: http://cogprints.org/id/eprint/47412006-02-27ZHETEROPHENOMENOLOGY VERSUS CRITICAL PHENOMENOLOGYFollowing an on-line dialogue with Dennett (Velmans, 2001) this paper examines the similarities and differences between heterophenomenology (HP) and critical phenomenology (CP), two competing accounts of the way that conscious phenomenology should be, and normally is, incorporated into psychology and related sciences. Dennett’s heterophenomenology includes subjective reports of conscious experiences, but according to Dennett, first person conscious phenomena in the form of “qualia” such as hardness, redness, itchiness etc. have no real existence. Consequently, subjective reports about such qualia should be understood as prescientific attempts to make sense of brain functioning that can be entirely understood in third person terms. I trace the history of this position in behaviourism (Watson, Skinner and Ryle) and early forms of physicalism and functionalism (Armstrong), and summarise some of the difficulties of this view. Critical phenomenology also includes a conventional, third person, scientific investigation of brain and behaviour that includes subjects’ reports of what they experience. CP is also cautious about the accuracy or completeness of subjective reports. However, unlike HP, CP does not assume that subjects are necessarily deluded about their experiences or doubt that these experiences can have real qualities that can, in principle, be described. Such experienced qualities cannot be exhaustively reduced to third-person accounts of brain and behaviour. CP is also reflexive, in that it assumes experimenters to have first-person experiences that they can describe much as their subjects do. 
And crucially, experimenters’ third-person reports of others are based, in the first instance, on their own first-person experiences. CP is commonplace in psychological science, and given that it conforms both to scientific practice and common sense, I argue that there is little to recommend HP other than an attempt to shore up a counterintuitive, reductive philosophy of mind.Prof Max Velmans2006-10-15Z2011-03-11T08:56:39Zhttp://cogprints.org/id/eprint/5217This item is in the repository with the URL: http://cogprints.org/id/eprint/52172006-10-15ZRestricted Complexity, General Complexity
Why has the problematic of complexity appeared so late? And why would it be justified?Edgar Morin2005-11-15Z2011-03-11T08:56:13Zhttp://cogprints.org/id/eprint/4618This item is in the repository with the URL: http://cogprints.org/id/eprint/46182005-11-15ZAn "Instrumentalism to Realism" Hypothesis
It is proposed here that all successful and complete theories always proceed through an intermediate stage of instrumentalism to the final stage of realism. Examples from the history of science (both classical and modern) in support of this hypothesis are presented.
Dr. Afsar Abbas2005-11-15Z2011-03-11T08:56:13Zhttp://cogprints.org/id/eprint/4617This item is in the repository with the URL: http://cogprints.org/id/eprint/46172005-11-15ZA "LAYERS OF REALITY TO A WEB OF INDUCTION" HYPOTHESISIt is shown that as knowledge is structured, it comes in modules. This provides different "layers of reality". Each layer of reality has its own distinctive inductive logic, which may differ from that of the others. All this is woven together to form a "web of induction" in a multidimensional space. It is the overall resilience, firmness and consistent interconnectedness of the whole web which justifies induction globally and which allows science to continue to "read" nature using inductive logic.
Dr. Afsar Abbas2005-11-15Z2011-03-11T08:56:13Zhttp://cogprints.org/id/eprint/4616This item is in the repository with the URL: http://cogprints.org/id/eprint/46162005-11-15ZMATHEMATICS AS AN EXACT AND PRECISE LANGUAGE OF NATUREOne of the outstanding problems of the philosophy of science and mathematics today is whether there is just "one" unique mathematics or whether it can be bifurcated into "pure" and "applied" categories. A novel solution for this problem is
offered here. This will allow us to appreciate the manner in which mathematics acts as an exact and precise language of nature. This has significant implications for Artificial Intelligence.
Dr. Afsar Abbas2005-11-15Z2011-03-11T08:56:14Zhttp://cogprints.org/id/eprint/4619This item is in the repository with the URL: http://cogprints.org/id/eprint/46192005-11-15ZA "RIP VAN WINKLE HYPOTHESIS" TO RESOLVE THE REALISM-ANTIREALISM DEBATE
The intensity of the debate between realists and antirealists shows no sign of abating. Here a new hypothesis is proposed to resolve the issue. The requirements of consistency and continuity are built into the methodology of this hypothesis. This new hypothesis supports realism.
Dr. Afsar Abbas2006-01-06Z2011-03-11T08:56:18Zhttp://cogprints.org/id/eprint/4679This item is in the repository with the URL: http://cogprints.org/id/eprint/46792006-01-06ZWhat is the Relatedness of Mathematics and Art
and why should we care?
There has been a wide range of human activities concerning the term “Art and Mathematics”. Looking directly at the historical roots, there has been a great deal of discussion of art and mathematics and their connections. The paper elaborates the connection between the two discourses of art and mathematics and how they influence each other.Hokky Situngkir2005-05-14Z2011-03-11T08:56:03Zhttp://cogprints.org/id/eprint/4356This item is in the repository with the URL: http://cogprints.org/id/eprint/43562005-05-14ZOn the Inherent Incompleteness of Scientific TheoriesWe examine the question of whether scientific theories can ever be complete. For two closely related reasons, we will argue that they cannot. The first reason is the inability to determine what are “valid empirical observations”, a result that is based on a self-reference Gödel/Tarski-like proof. The second reason is the existence of “meta-empirical” evidence of the inherent incompleteness of observations. These reasons, along with theoretical incompleteness, are intimately connected to the notion of belief and to theses within the philosophy of science: the Quine-Duhem (and underdetermination) thesis and the observational/theoretical distinction failure. Some puzzling aspects of the philosophical theses will become clearer in light of these connections. Other results that follow are: no absolute measure of the informational content of empirical data, no absolute measure of the entropy of physical systems, and no complete computer simulation of the natural world are possible. The connections with the mathematical theorems of Gödel and Tarski reveal the existence of other connections between scientific and mathematical incompleteness: computational irreducibility, complexity, infinity, arbitrariness and self-reference. 
Finally, suggestions will be offered of where a more rigorous (or formal) “proof” of scientific incompleteness can be found.Jolly Mathen2005-02-01Z2011-03-11T08:55:49Zhttp://cogprints.org/id/eprint/4045This item is in the repository with the URL: http://cogprints.org/id/eprint/40452005-02-01ZCausality and the Doomsday ArgumentUsing the Autodialer thought experiment, we show that the Self-Sampling Assumption (SSA) is too general, and propose a revision to the assumption that limits its applicability to causally-independent observers. Under the revised assumption, the Doomsday Argument fails, and the paradoxes associated with the standard SSA are dispelled. We also consider the effects of the revised sampling assumption on tests of cosmological theories. There we find that, while we must restrict our attention to universes containing at least one observer, the total number of observers predicted in each universe is irrelevant to the confirmation of a theory.Ivan Phillips2005-08-20Z2011-03-11T08:56:09Zhttp://cogprints.org/id/eprint/4498This item is in the repository with the URL: http://cogprints.org/id/eprint/44982005-08-20ZModels of Cognition: Neurological possibility does not indicate neurological plausibilityMany activities in Cognitive Science involve complex computer models and simulations of both theoretical and real entities. Artificial Intelligence and the study of artificial neural nets in particular, are seen as major contributors in the quest for understanding the human mind. Computational models serve as objects of experimentation, and results from these virtual experiments are tacitly included in the framework of empirical science. Cognitive functions, like learning to speak, or discovering syntactical structures in language, have been modeled and these models are the basis for many claims about human cognitive capacities. 
Artificial neural nets (ANNs) have had some successes in the field of Artificial Intelligence, but the results from experiments with simple ANNs may have little value in explaining cognitive functions. The problem seems to be in relating cognitive concepts that belong in the `top-down' approach to models grounded in the `bottom-up' connectionist methodology. Merging the two fundamentally different paradigms within a single model can obfuscate what is really modeled. When the tools (simple artificial neural networks) to solve the problems (explaining aspects of higher cognitive functions) are mismatched, models with little value in terms of explaining functions of the human mind are produced. The ability to learn functions from data-points makes ANNs very attractive analytical tools. These tools can be developed into valuable models if the data is adequate and a meaningful interpretation of the data is possible. The problem is that, with appropriate data and labels that fit the desired level of description, almost any function can be modeled. It is my argument that small networks offer a universal framework for modeling any conceivable cognitive theory, so that neurological possibility can be demonstrated easily with relatively simple models. However, a model demonstrating the possibility of implementation of a cognitive function using a distributed methodology does not necessarily add support to any claims or assumptions that the cognitive function in question is neurologically plausible.Peter R. 
Krebs2007-03-22Z2011-03-11T08:56:48Zhttp://cogprints.org/id/eprint/5461This item is in the repository with the URL: http://cogprints.org/id/eprint/54612007-03-22ZThe NSTP (Non-Spatial Thinking Process) TheoryThe NSTP theory is a (philosophy of mind) semi-idealistic as well as semi-dualistic theory that the material universe, where some peculiar phenomena like quantum non-locality exist, is exclusively a group of superhuman as well as non-superhuman thinking processes existing in the form of (non-spatial physical/material) feelings (i.e. states of consciousness). In computer terminology, it regards the (material) universe as a non-spatial computer, with hardware of (non-spatial) feelings and software of superhuman as well as non-superhuman thoughts/ideas, including those of space, which is then an illusive/virtual/merely apparent entity. The mere existence of the superhuman thoughts is responsible for the empirical (i.e. a posteriori) order in the non-superhuman ones. The theory, however, accepts the possibility of the reality of space, the space where phenomena like quantum non-locality do not exist. The theory is constituted of 6 axioms, 1 theorem, and 3 conjectures. The key strength and novelty of the theory lie in its axiomatic/self-evident foundation, its innovative semi-idealism and semi-dualism, and, in general, its road to idealism and dualism.Kedar Joshi2011-12-16T00:05:03Z2011-12-16T00:05:03Zhttp://cogprints.org/id/eprint/7760This item is in the repository with the URL: http://cogprints.org/id/eprint/77602011-12-16T00:05:03ZThe Role of Physics in Science IntegrationThe Special and General theories of relativity may be considered the most significant examples of integrative thinking. From these works we see that Albert Einstein attached great importance to how we understand geometry and dimensions. It is shown that physics powered by the new multidimensional elastic geometry is a reliable basis for science integration. 
Instead of searching for braneworlds (elastic membranes - EM) in higher dimensions, we will start by searching for them in our 3+1 dimensional world. The cornerstone of the new philosophy is the idea that lower-dimensional EMs are an essential component of living matter; they are responsible for our perceptions, intellect, pattern recognition and high-speed signal propagation. According to this theory each EM has both physical and perceptive (psychological) meanings: it exists as our Universe-like physical reality for its inner objects and at the same time it plays a perceptive (psychological) role in the external bulk space-time. This philosophy may help us to build up a science which explains not only inanimate, unconscious phenomena, but consciousness as well.Dr. Alexander Egoyanalex21cen@yahoo.com2004-12-15Z2011-03-11T08:55:45Zhttp://cogprints.org/id/eprint/3987This item is in the repository with the URL: http://cogprints.org/id/eprint/39872004-12-15ZOn the Inherent Incompleteness of Scientific TheoriesWe examine the question of whether scientific theories can ever be complete. For two closely related reasons, we will argue that they cannot. The first reason is the inability to determine what are “valid empirical observations”, a result that is based on a self-reference Gödel/Tarski-like proof. The second reason is the existence of “meta-empirical” evidence of the inherent incompleteness of observations. These reasons, along with theoretical incompleteness, are intimately connected to the notion of belief and to theses within the philosophy of science: the Quine-Duhem (and underdetermination) thesis and the observational/theoretical distinction failure. Some puzzling aspects of the philosophical theses will become clearer in light of these connections. 
Other results that follow are: no absolute measure of the informational content of empirical data, no absolute measure of the entropy of physical systems, and no complete computer simulation of the natural world are possible. The connections with the mathematical theorems of Gödel and Tarski reveal the existence of other connections between scientific and mathematical incompleteness: computational irreducibility, complexity, infinity, arbitrariness and self-reference. Finally, suggestions will be offered of where a more rigorous (or formal) “proof” of scientific incompleteness can be found.Jolly Mathen2005-02-26Z2011-03-11T08:55:51Zhttp://cogprints.org/id/eprint/4109This item is in the repository with the URL: http://cogprints.org/id/eprint/41092005-02-26ZTowards a Model of Life and CognitionWhat should be the ontology of the world such that life and cognition are possible? In this essay, I undertake to outline an alternative ontological foundation which makes biological and cognitive phenomena possible. The foundation is built by defining a model, which is presented in the form of a description of a hypothetical but a logically possible world with a defined ontological base.
Biology rests today on quite a few not so well connected foundations: molecular biology based on the genetic dogma; evolutionary biology based on the neo-Darwinian model; ecology based on the systems view; developmental biology based on morphogenetic models; connectionist models for neurophysiology and cognitive biology; pervasive teleonomic explanations for goal-directed behavior across the discipline; etc. Can there be an underlying connecting theme or a model which could make these seemingly disparate domains interconnected? I shall attempt to answer this question.
By following the semantic view of scientific theories, I tend to believe that the models employed by the present physical sciences are not rich enough to capture biological (and some of the non-biological) systems. A richer theory that could capture biological reality could also capture physical and chemical phenomena as limiting cases, but
not vice versa.Nagarjuna G.Nagarjuna G.2006-05-30Z2011-03-11T08:56:26Zhttp://cogprints.org/id/eprint/4895This item is in the repository with the URL: http://cogprints.org/id/eprint/48952006-05-30ZTowards a Model of Life and CognitionThis essay argues for an alternative scientific foundation for accounting for complex phenomena like life, cognition and evolution. The approach taken to the problem is neither reductionism nor emergentism (holism), but a third alternative called assimilationism. The analysis based on the alternative foundation indicates some counterintuitive implications, such as: chemical reactions can happen independent of heat under idealized conditions; all systems, including non-living ones, counteract perturbations in order to exist; non-living systems are more open than living ones.
Outline: There are abundant building blocks that are systems but not atoms, which perturb each other. The building blocks are heterogeneous (have different functional interfaces). There are mainly two kinds of interactions: identity-preserving (IP) and identity-transforming (IT) interactions. Given only IP interactions the system would reach high entropy --- the first tendency. Given only IT interactions the system would reach a crystalline state --- the second tendency. The actual world is a function of these two tendencies. All beings (living as well as non-living) are open, and their adaptation in an environment is an expression of their invertibility of the two tendencies. Living beings are part of a special dialogically invertible space made by amphipathic agents like water molecules on the one hand and agents with multiple interfaces, like biomolecules with possibilities of interacting among their own functional interfaces, on the other. This space makes possible a dialogical opposition of the two tendencies: distribution and collection of energy. Thus, a living being is described as a neither-nor state between the two extremes. The characteristic of this space is to maintain the state by replacement, reproduction, recycling or feedback. The abundance of little loops produces highly efficient work cycles, minimizing external energy dependence. A self-reproducing network of such beings manages to engulf a process and a counter-process within the network of a being, to counteract the two `deadly' tendencies. A living being is capable of displaying behavioral changes without undergoing change in identity. Thus, living beings are interpreted to be more closed than non-living ones, which can neither resist nor repair interactions. And this logic continues to operate recursively to explain physiology, epigenesis, evolution, adaptation, complexity, autonomy and cognition.
The initial cognitive base of a living being is rooted in the invertibility of the perturbations from the environment. It is hypothesized that this repairing process itself becomes the difference, and the processes that are induced in turn within the system generate a differentiation of difference, which is defined as knowledge. However, this knowledge is implicit, and cannot account for conscious cognition, which is explicit.
Nagarjuna G.Nagarjuna G.2004-06-05Z2011-03-11T08:55:37Zhttp://cogprints.org/id/eprint/3667This item is in the repository with the URL: http://cogprints.org/id/eprint/36672004-06-05ZCoincidence, data compression, and Mach’s concept
of “economy of thought”
A case is made that Mach’s principle of “economy of thought”, and therefore usefulness, is related to the compressibility of data, but that a mathematical expression may compress data for reasons that are sometimes coincidental and sometimes not. An expression, therefore, may be sometimes explainable and sometimes not. A method is proposed for distinguishing coincidental data compression from non-coincidental, where this method may serve as a guide in uncovering new mathematical relationships. The method works by producing a probability that a given mathematical expression achieves its compression purely by chance.J. S. Markovitch2005-04-20Z2011-03-11T08:55:59Zhttp://cogprints.org/id/eprint/4262This item is in the repository with the URL: http://cogprints.org/id/eprint/42622005-04-20ZAgainst the inappropriate use of numerical representation in social simulationAll tools have their advantages and disadvantages and for all tools there are times when they are appropriate and times when they are not. Formal tools are no exception to this and systems of numbers are examples of such formal tools. Thus there will be occasions where using a number to represent something is helpful and times where it is not. To use a tool well one needs to understand that tool and, in particular, when it may be inadvisable to use it and what its weaknesses are.
However we are in an age that is obsessed by numbers. Governments spend large amounts of money training their citizens in how to use numbers and their declarative abstractions (graphs, algebra etc.). We are surrounded by numbers every day: in the news, weather forecasts, our speedometers and our bank balance. We are used to using numbers in loose, almost “conversational” ways – as with such concepts as the rate of inflation and our own “IQ”. Numbers have become so familiar that we no more worry about when and why we use them than we do about natural language. We have lost the warning bells in our head that remind us that we may be using numbers inappropriately. They have entered (and sometimes dominate) our language of thought. Computers have exacerbated this trend by making numbers very much easier to store, manipulate and communicate, and more seductive by making possible attractive pictures and animations of their patterns. More subtly, when thought of as calculating machines that can play games with us and simulate the detail of physical systems, they suggest that everything comes down to numbers.
For this reason it is second nature for us to use numbers in our social simulations and we frequently do so without considering the consequences of this choice. This paper is simply a reminder about numbers: a call to remember that they are just another (formal) tool; it recaps some of the conditions which indicate when a number is applicable and when it might be misleading; it looks at some of the dangers and pitfalls of using numbers; it considers some examples of the use of numbers; and it points out that we now have some viable alternatives to numbers that are not any less formal but which may often be preferable.Dr Bruce Edmonds2004-02-03Z2011-03-11T08:55:28Zhttp://cogprints.org/id/eprint/3418This item is in the repository with the URL: http://cogprints.org/id/eprint/34182004-02-03ZIdeas are not replicators but minds areAn idea is not a replicator because it does not consist of coded self-assembly instructions. It may retain structure as it passes from one individual to another, but does not replicate it. The cultural replicator is not an idea but an associatively-structured network of them that together form an internal model of the world, or worldview. A worldview is a primitive, uncoded replicator, like the autocatalytic sets of polymers widely believed to be the earliest form of life. Primitive replicators generate self-similar structure, but because the process happens in a piecemeal manner, through bottom-up interactions rather than a top-down code, they replicate with low fidelity, and acquired characteristics are inherited. Just as polymers catalyze reactions that generate other polymers, the retrieval of an item from memory can in turn trigger other items, thus cross-linking memories, ideas, and concepts into an integrated conceptual structure. Worldviews evolve idea by idea, largely through social exchange. 
An idea participates in the evolution of culture by revealing certain aspects of the worldview that generated it, thereby affecting the worldviews of those exposed to it. If an idea influences seemingly unrelated fields this does not mean that separate cultural lineages are contaminating one another, because it is worldviews, not ideas, that are the basic unit of cultural evolution.Dr. Liane Gabora2005-04-20Z2011-03-11T08:55:59Zhttp://cogprints.org/id/eprint/4263This item is in the repository with the URL: http://cogprints.org/id/eprint/42632005-04-20ZArtificial Science
– a simulation test-bed for studying the social processes of scienceIt is likely that there are many different social processes occurring in different parts of science and at different times, and that these processes will impact upon the nature, quality and quantity of the knowledge that is produced in a multitude of ways and to different extents. It seems clear to me that sometimes the social processes act to increase the reliability of knowledge (such as when there is a tradition of independently reproducing experiments) but sometimes do the opposite (as when a closed clique acts to perpetuate itself by reducing opportunity for criticism). Simulation can perform a valuable role here by providing and refining possible linkages between kinds of social processes and their results in terms of knowledge. Earlier simulations of this sort include Gilbert et al. in [10]. The simulation described herein aims to progress this work with a more structural and descriptive approach, relating what is done by individuals and journals to what collectively results in terms of the overall process.Bruce Edmonds2004-11-13Z2011-03-11T08:55:43Zhttp://cogprints.org/id/eprint/3935This item is in the repository with the URL: http://cogprints.org/id/eprint/39352004-11-13ZBiological Kinds and the Causal Theory of ReferenceThis paper uses an example from biology, the homology concept, to argue that current versions of the causal theory of reference give an incomplete account of reference determination. 
It is suggested that in addition to samples and stereotypical properties, the scientific use of concepts and the epistemic interests pursued with concepts are important factors in determining the reference of natural kind terms.Ingo Brigandt48652004-04-28Z2011-03-11T08:55:31Zhttp://cogprints.org/id/eprint/3576This item is in the repository with the URL: http://cogprints.org/id/eprint/35762004-04-28ZConceptual Role Semantics, the Theory Theory, and Conceptual ChangeThe purpose of the paper is twofold. I first outline a philosophical theory of concepts based on conceptual role semantics. This approach is explicitly intended as a framework for the study and explanation of conceptual change in science. Then I point to the close similarities between this philosophical framework and the theory theory of concepts, suggesting that a convergence between psychological and philosophical approaches to concepts is possible. An underlying theme is to stress that using a non-atomist account of concepts is crucial for the successful study of conceptual development and change—both for the explanation of individual cognitive development and for the study of conceptual change in science.Ingo Brigandt2004-11-13Z2011-03-11T08:55:43Zhttp://cogprints.org/id/eprint/3934This item is in the repository with the URL: http://cogprints.org/id/eprint/39342004-11-13ZHolism, Concept Individuation, and Conceptual ChangeThe paper discusses concept individuation in the context of scientific concepts and conceptual change in science. It is argued that some concepts can be individuated in different ways. A particular term may be viewed as corresponding to a single concept (which is ascribed to every person from a whole scientific field). But at the same time, we can legitimately individuate in a more fine-grained manner, i.e., this term can also be considered as corresponding to two or several concepts (so that each of these concepts is attributed to a smaller group of persons only). 
The reason is that there are different philosophical and explanatory interests that underlie a particular study of the change of a scientific term. These interests determine how a concept is to be individuated; and as the same term can be subject to different philosophical studies and interests, its content can be individuated in different ways.Ingo Brigandt48652004-03-04Z2011-03-11T08:55:28Zhttp://cogprints.org/id/eprint/3439This item is in the repository with the URL: http://cogprints.org/id/eprint/34392004-03-04ZHow can we think the complex?In this chapter we want to provide philosophical tools for understanding and reasoning about complex systems. Classical thinking, which is taught at most schools and universities, has several problems in coping with complexity. We review classical thinking and its drawbacks when dealing with complexity, and then present ways of thinking which allow a better understanding of complex systems. Examples illustrate the ideas presented. This chapter does not deal with specific tools and techniques for managing complex systems; rather, we try to bring forth ideas that facilitate thinking and speaking about complex systems.
Carlos GershensonFrancis Heylighen2008-08-24T10:56:57Z2011-03-11T08:57:10Zhttp://cogprints.org/id/eprint/6174This item is in the repository with the URL: http://cogprints.org/id/eprint/61742008-08-24T10:56:57ZMental Representations: the New Sense-Data?The notion of representation has become ubiquitous throughout cognitive psychology, cognitive neuroscience and the cognitive sciences generally. This paper addresses the status of mental representations as entities that have been posited to explain cognition. I do so by examining similarities between mental representations and sense-data in both their characteristics and key arguments offered for each. I hope to show that more caution in the adoption and use of representations in explaining cognition is warranted. Moreover, by paying attention to problematic notions of representations, a less problematic sense of representation might emerge. Chuck Stiegstie0076@umn.edu2004-07-30Z2011-03-11T08:55:38Zhttp://cogprints.org/id/eprint/3728This item is in the repository with the URL: http://cogprints.org/id/eprint/37282004-07-30ZReview of Bennett and Hacker, Philosophical Foundations of Neuroscience
This review tries to give an overview of the key issues in Bennett and Hacker's attempt at a neurophilosophy and theoretical physiology, and to assess the prospects of that neurophilosophy being accepted by the actual neurosciences and life sciences.Andres Soosaar2004-07-13Z2011-03-11T08:55:38Zhttp://cogprints.org/id/eprint/3719This item is in the repository with the URL: http://cogprints.org/id/eprint/37192004-07-13ZSiren call of Metaphor: subverting the proper task of System NeuroscienceUnder the assumption that nervous systems form a distinct category among the objects in Nature, applying metaphors from the psychological and behavioral science disciplines is flawed and invites confusion. Moreover, such practices obscure and detract from the primary task of Neurophysiology: to investigate the intrinsic properties of nervous systems, uncontaminated with concepts borrowed from other disciplines. A comprehensive fundamental theory of nervous systems is expected to have the character of high-dimensional nonlinear systems in which state space transitions, set in motion by external influences, self-organize to dynamic state space configurations with consequences for behavior.M.D. Gerhard Werner2006-07-01Z2011-03-11T08:56:28Zhttp://cogprints.org/id/eprint/4949This item is in the repository with the URL: http://cogprints.org/id/eprint/49492006-07-01ZSmoke without Fire: What do virtual experiments in Cognitive Science really tell us?Many activities in Cognitive Science involve complex computer models and simulations of both theoretical and real entities. Artificial Intelligence, and the study of artificial neural nets in particular, are seen as major contributors in the quest for understanding the human mind. Computational models serve as objects of experimentation, and results from these virtual experiments are tacitly included in the framework of empirical science. 
Simulations of cognitive functions, like learning to speak, or discovering syntactical structures in language, are the basis for many claims about human capacities in language acquisition. This raises the question whether results obtained from experiments that are essentially performed on data structures are equivalent to results from "real" experiments. This paper examines some design methodologies for models of cognitive functions using artificial neural nets. The process of conducting the cognitive simulations is largely a projection of theories, or even unsubstantiated conjectures, onto simulated neural structures and an interpretation of the experimental results in terms of the human brain. The problem with this process is that results from virtual experiments are taken to refer unambiguously to the human brain; and the more the language of human cognitive function is deployed in both theory construction and (virtual) experimental interpretation, the more convincing the impression. Additionally, the complexity of the methodologies, principles, and visualization techniques, in the implementation of the computational model, masks the lack of actual similarities between model and real world phenomena. Some computational models involving artificial neural nets have had some success, even commercially, but there are indications that the results from virtual experiments have little value in explaining cognitive functions. The problem seems to be in relating computational, or mathematical, entities to real world objects, like neurons and brains. I argue that the role of Artificial Intelligence as a contributor to the knowledge base of Cognitive Science is diminished as a consequence.Mr Peter R. 
Krebs2005-01-13Z2011-03-11T08:55:47Zhttp://cogprints.org/id/eprint/3994This item is in the repository with the URL: http://cogprints.org/id/eprint/39942005-01-13ZTwo Accounts of Moral Diversity: The Cognitive Science of Pluralism and AbsolutismAdvances in cognitive science are relevant to the debate between moral pluralism and absolutism. Parametric structure, which plausibly underlies syntax, gives some idea of how pluralism might be true. The cognitive mechanisms underlying mathematical intelligence give some idea of how far absolutism is right. Advances in cognitive science should help us better understand the extent to which we are divided
and how far we are potentially harmonious in our values.John Bolender2006-08-01Z2011-03-11T08:55:43Zhttp://cogprints.org/id/eprint/3902This item is in the repository with the URL: http://cogprints.org/id/eprint/39022006-08-01ZHomi Jehangir Bhabha as a Knowledge Generating System: A Longitudinal Cognition StudyQuantitative analysis of the events of synchronous references in the research papers followed throughout the publishing career of an individual scientist revealed interesting highlights on the knowledge-generating system. In the case study of Homi Jehangir Bhabha, the first and fifth quinquennia of his research career had low self-references; the third and fourth quinquennia had moderate self-references; whereas the second quinquennium had the highest self-references. The two major clusters of self-references occurring during the second and third quinquennia were indicators of active periods of knowledge generation and faster communication.T. SwarnaV.L. Kalyanevkalyane@yahoo.comE.R. PrakasanVijai Kumar2003-06-02Z2011-03-11T08:55:17Zhttp://cogprints.org/id/eprint/2990This item is in the repository with the URL: http://cogprints.org/id/eprint/29902003-06-02ZA Third Route to the Doomsday ArgumentIn this paper, I present a solution to the Doomsday argument based on a third type of solution, by contrast with, on the one hand, the Carter-Leslie view and, on the other hand, the Eckhardt et al. analysis. The present line of thought is based on the fact that both aforementioned analyses rest on an inaccurate analogy. After discussing the imperfections of both models, I then present a two-sided model that fits more adequately with the human situation corresponding to DA and encapsulates both Carter-Leslie's and Eckhardt et al.'s models. I argue then that this new analogy also holds when one takes into account the issue of indeterminism and the reference class problem. 
This leads finally to a novel formulation of the argument that could well be more consensual than the original one.Paul Franceschi2004-05-06Z2011-03-11T08:55:30Zhttp://cogprints.org/id/eprint/3519This item is in the repository with the URL: http://cogprints.org/id/eprint/35192004-05-06ZEMERGING THE EMERGENCE SOCIOLOGY: The Philosophical Framework of Agent-Based Social StudiesStructuration theory, originally proposed by Anthony Giddens, and later refinements of it have tried to resolve a dilemma that arises in the epistemological aspects of the social sciences and humanities: social scientists apparently have to choose between being too sociological or too psychological. Nonetheless, this point was stated long ago in the work of the classical sociologist Emile Durkheim. The use of models to construct bottom-up theories has followed the expansion of computational technology; this approach is well known as agent-based modeling. This paper gives a philosophical perspective on the agent-based social sciences, as a sociology that copes with the emergent factors arising in sociological analysis. The framework is built using an artificial neural network model to show how emergent phenomena arise from a complex system. Understanding that society has self-organizing (autopoietic) properties, Kohonen's self-organizing map is used in the paper. The simulation examples make it clear that emergent phenomena in social systems lie beyond the qualitative framework of atomistic sociology. The paper concludes that an emergence sociology is needed for sharpening sociological analysis.Hokky Situngkir2007-07-14Z2011-03-11T08:56:53Zhttp://cogprints.org/id/eprint/5600This item is in the repository with the URL: http://cogprints.org/id/eprint/56002007-07-14ZThe Mind-Body Problem in the Origin of Logical Empiricism:
Herbert Feigl and Psychophysical ParallelismIn the 19th century, "Psychophysical Parallelism" was the most popular solution to the mind-body problem among physiologists, psychologists and philosophers. (This is not to be confused with Leibnizian and other cases of "Cartesian" parallelism.) The fate of this non-Cartesian view, as founded by Gustav Theodor Fechner, is reviewed. It is shown that Feigl's "identity theory" eventually goes back to Alois Riehl, who promoted a hybrid version of psychophysical parallelism and Kantian mind-body theory which was taken up by Feigl's teacher Moritz Schlick.Michael Heidelberger2003-09-19Z2011-03-11T08:55:20Zhttp://cogprints.org/id/eprint/3153This item is in the repository with the URL: http://cogprints.org/id/eprint/31532003-09-19ZSome meta-theoretical issues relating to statistical inferenceThis paper is a reply to some comments made by Green (2002) on Chow’s (2002) critique of Wilkinson and the Task Force's (1999) report on statistical inference. Issues raised are (a) the inappropriateness of accepting methodological prescription on authority, (b) the vacuity of non-falsifiable theories, (c) the need to distinguish between experiment and meta-experiment, and (d) the probability foundation of the null-hypothesis significance-test procedure (NHSTP). This reply is intended to foster a better understanding of research methods in general, and of the role of NHSTP in empirical research in particular.Dr. 
Siu L Chow2003-08-29Z2011-03-11T08:55:20Zhttp://cogprints.org/id/eprint/3125This item is in the repository with the URL: http://cogprints.org/id/eprint/31252003-08-29ZSubjective Perception of Time and a Progressive Present Moment: The Neurobiological Key to Unlocking ConsciousnessThe conclusion of physics, within both a historical and more recent context, that an objectively progressive time and present moment are derivative notions without actual physical foundation in nature, illustrates that these perceived chronological features originate from subjective conscious experience and the neurobiological processes underlying it. Using this conclusion as a stepping stone, it is posited that the phenomena of an in-built subjective conception of a progressive present moment in time and that of conscious awareness are actually one and the same thing, and as such, are also the outcome of the same neurobiological processes. A possible explanation as to how this might be achieved by the brain through employing the neuronally induced nonconscious cognitive manipulation of a small interval of time is proposed. The CIP phenomenon, elucidated within the context of this study, is then also discussed.Peter Lynds2003-10-14Z2011-03-11T08:55:22Zhttp://cogprints.org/id/eprint/3220This item is in the repository with the URL: http://cogprints.org/id/eprint/32202003-10-14ZTemporal PassageThis article explains that time flow is a subjective, mind-dependent phenomenon. The paper describes the nature of the subjective "present" of consciousness, and defines the mechanism that brings about this present's motion from past to future.
The first section of the article demonstrates that existence is a dynamic process and shows that time arises from this process. The second section presents a geometric analysis of the present's motion. The third section contrasts space with time. In the last section, consciousness and time are discussed within the context of Einstein's theory of relativity.
Adhanom Andemicael2002-12-01Z2011-03-11T08:55:06Zhttp://cogprints.org/id/eprint/2629This item is in the repository with the URL: http://cogprints.org/id/eprint/26292002-12-01ZA Third Route to the Doomsday ArgumentIn this paper, I present a solution to the Doomsday argument (DA) based on a third type of solution, by contrast with, on the one hand, the Carter-Leslie view and, on the other hand, the Eckhardt-Sowers-Sober analysis. I argue that both aforementioned analyses are based on an inaccurate analogy. After discussing the imperfections of both models, I then present a novel model that fits more adequately with the human situation corresponding to DA. This last model also encapsulates both Carter-Leslie's and Eckhardt et al.'s models, and reveals a link with the issue of mind-body dualism. Lastly, I argue that this novel analogy, combined with an adequate solution to the reference class problem, leads to a novel formulation of the argument that could well be more consensual than the original one.
Paul Franceschi2002-12-15Z2011-03-11T08:55:07Zhttp://cogprints.org/id/eprint/2660This item is in the repository with the URL: http://cogprints.org/id/eprint/26602002-12-15ZWhat is Real? Conscious Experience Seen as Basic to Ontology. An Overview
The idealist attitude followed in this paper is based on the assumption that only conscious experience in the Now is real. Conscious experience in the Now is supposed to be known directly or intuitively; it cannot be explained. I think it constitutes the basis of all ontology. Consciousness is conceived as the total of conscious experience in the Now; the ontology of consciousness is thus derived directly from the basis. The ontology of nature is derived more indirectly from the basis. Science is regarded as a catalog of selected conscious experiences (observations), acknowledged to be scientific and structured by means of concepts and theories (also regarded as conscious experiences). Material objects are regarded as heuristic concepts constructed from the immediate experiences in the Now and useful for expressing observations within a certain domain with some of their mutual relations. History is also regarded as a construct from conscious experiences in the Now. Concepts of worlds without an ego are seen to be in harmony with immediate egoless experiences. Worlds including spirituality are conceived as based on immediate spiritual experiences together with other immediate experiences.
Idealist or immaterial philosophies have been criticized for implying solipsism or "solipsism of the present moment". This critique is countered by emphasizing the importance of intersubjectivity for science and by introducing the more precise concepts of collective conscious experience and collective conscious experience across time. Comprehensive evidence supporting the heuristic value of these concepts is related.
I conclude that the idealist approach leads to a coherent comprehension of natural science including mind-brain relations, while the mainstream materialist approach entails contradictions and other problems for a coherent understanding. The idealist approach and the notion of collective conscious experience also facilitate cross-cultural studies and the understanding of intersubjectivity.
Dr. Axel RandrupUsername: Axel randrup2003-04-26Z2011-03-11T08:55:16Zhttp://cogprints.org/id/eprint/2910This item is in the repository with the URL: http://cogprints.org/id/eprint/29102003-04-26ZComment on Chow's "Issues in Statistical Inference"Contrary to Chow, Wilkinson's report, though more tentative than it might have been, is a reasoned and valuable contribution to psychological science. For those who are quite familiar with the details of statistical methods, it confirms much of what has been happening in the literature over the past few decades. For those who have not been keeping abreast of new developments on the statistical scene, it alerts them in a gentle way that there have been some important changes since they earned their degrees, and
that they should probably read up on these advances before embarking upon their next research program or teaching their next statistics course. Christopher D. Green2001-11-19Z2011-03-11T08:54:49Zhttp://cogprints.org/id/eprint/1900This item is in the repository with the URL: http://cogprints.org/id/eprint/19002001-11-19ZComplex PhilosophyAs science, knowledge, and ideas evolve and are increased and refined, the branches
of philosophy in charge of describing them should also be increased and refined. In this work
we try to expand some ideas as a response to the recent approach from several sciences to
complex systems. Because of their novelty, some of these ideas might require further
refinement and may seem unfinished, but we need to start with something. Only with their
propagation and feedback from critics might they be improved.
We give a brief introduction to complex systems, and then define <em>abstraction levels</em>.
Abstraction levels represent simplicities and regularities in nature. We make an ontological
distinction of absolute being and relative being, and then discuss issues on causality,
metaphysics, and determinism.Carlos Gershenson2004-02-03Z2011-03-11T08:54:54Zhttp://cogprints.org/id/eprint/2139This item is in the repository with the URL: http://cogprints.org/id/eprint/21392004-02-03ZContextualizing ConceptsTo cope with problems arising in the description of (1) contextual interactions, and (2) the generation of new states with new properties when quantum entities become entangled, the mathematics of quantum mechanics was developed. Similar problems arise with concepts. We use a generalization of standard quantum mechanics, the mathematical lattice theoretic formalism, to develop a formal description of the contextual manner in which concepts are evoked, used, and combined to generate meaning.Liane GaboraDiederik Aerts2003-02-21Z2011-03-11T08:55:13Zhttp://cogprints.org/id/eprint/2781This item is in the repository with the URL: http://cogprints.org/id/eprint/27812003-02-21ZExperimentation in Psychology--Rationale, Concepts and IssuesAn experiment is made up of two or more data-collection conditions that are identical in all aspects but one. It owes its design to an inductive principle and its hypothesis to deductive logic. It is most suited for corroborating explanatory theories, ascertaining functional relationships, or assessing the substantive effectiveness of a manipulation. Also discussed are (a) the three meanings of 'control,' (b) the issue of ecological validity, (c) the distinction between theory-corroboration and agricultural-model experiments, and (d) the distinction among the hypotheses at four levels of abstraction that are implicit in an experiment.
Siu L. Chow2003-04-21Z2011-03-11T08:55:15Zhttp://cogprints.org/id/eprint/2892This item is in the repository with the URL: http://cogprints.org/id/eprint/28922003-04-21ZIssues in Statistical InferenceThe APA Task Force’s treatment of research methods is critically examined. The present defense of the experiment rests on showing that (a) the control group cannot be replaced by the contrast group, (b) experimental psychologists have valid reasons to use non-randomly selected subjects, (c) there is no evidential support for the experimenter expectancy effect, (d) the Task Force had misrepresented the role of inductive and deductive logic, and (e) the validity of experimental data does not require appealing to the effect size or statistical power.Dr. Siu L. Chow2002-12-11Z2011-03-11T08:55:07Zhttp://cogprints.org/id/eprint/2643This item is in the repository with the URL: http://cogprints.org/id/eprint/26432002-12-11ZMethods in Psychological ResearchPsychologists collect empirical data with various methods for different reasons. These diverse methods have their strengths as well as weaknesses. Nonetheless, it is possible to rank them in terms of different criteria. For example, the experimental method is used to obtain the least ambiguous conclusion. Hence, it is best suited to corroborate conceptual, explanatory hypotheses. The interview method, on the other hand, gives the research participants a kind of empathic experience that may be important to them. It is for this reason the best method to use in a clinical setting. All non-experimental methods owe their origin to the interview method. Quasi-experiments are suited for answering practical questions when ecological validity is important.Siu L. 
Chowsiu.chow2003-02-21Z2011-03-11T08:55:13Zhttp://cogprints.org/id/eprint/2782This item is in the repository with the URL: http://cogprints.org/id/eprint/27822003-02-21ZStatistics and Its Role in Psychological ResearchHow one may use descriptive statistics to give a succinct description of research data is first discussed. The probability basis of inferential statistics, namely, the random sampling distribution of the test statistic, is then introduced. The said sampling distribution is used to introduce the null-hypothesis significance-testing procedure (NHSTP). The emphasis on 'procedure' serves to highlight the fact that significance tests are about data, not about the substantive hypothesis. The distinction is made between (a) the statistical alternative hypothesis (H1) and the substantive hypothesis, (b) using NHSTP to test whether or not chance is responsible for the data and using the embedding conditional propositions to corroborate the theory. NHSTP serves to supply the minor premise for the theory-corroboration procedure. Some conceptual difficulties with effect-size, statistical power and meta-analysis are also discussed.Siu L. Chow2003-10-04Z2011-03-11T08:55:03Zhttp://cogprints.org/id/eprint/2508This item is in the repository with the URL: http://cogprints.org/id/eprint/25082003-10-04ZWhy it is important to build robots capable of doing scienceScience, like any other cognitive activity, is grounded in the sensorimotor interaction of our bodies with the environment. Human embodiment thus constrains the class of scientific concepts and theories which are accessible to us. The paper explores the possibility of doing science with artificial cognitive agents, in the framework of an interactivist-constructivist cognitive model of science. Intelligent robots, by virtue of having different sensorimotor capabilities, may overcome the fundamental limitations of human science and provide important technological innovations. 
Mathematics and nanophysics are prime candidates for being studied by artificial scientists.Razvan V. Florian2001-05-09Z2011-03-11T08:54:38Zhttp://cogprints.org/id/eprint/1492This item is in the repository with the URL: http://cogprints.org/id/eprint/14922001-05-09ZWhy Did We Think We Dreamed in Black and White?In the 1950's, dream researchers commonly thought that dreams were predominantly a black-and-white phenomenon, although both earlier and later treatments of dreaming presume or assert that dreams have color. The first half of the twentieth century saw the rise of black-and-white film media, and it is likely that the emergence of the view that dreams are black-and-white was connected with this change in media technology. If our opinions about basic features of our dreams can change with changes in technology, it seems to follow that our knowledge of the phenomenology of our own dreams is much less secure than we might at first have thought it to be.Eric Schwitzgebel2001-11-19Z2011-03-11T08:54:49Zhttp://cogprints.org/id/eprint/1901This item is in the repository with the URL: http://cogprints.org/id/eprint/19012001-11-19ZComments to NeutrosophyAny system based on axioms is incomplete because the axioms cannot be
proven from the system, only believed. But one system can be less-incomplete than
another. Neutrosophy is less-incomplete than many other systems because it contains
them. But this does not mean that it is finished; it can always be improved. The
comments presented here are an attempt to make Neutrosophy even less-incomplete.
I argue that less-incomplete ideas are more useful, since we cannot
perceive truth, falsity, or indeterminacy independently of a context; they are
therefore relative. Absolute being and relative being are defined. The "silly
theorem problem" is also posed, and its partial solution described. The issues arising
from the incompleteness of our contexts are presented. We also note the relativity
of logic and its dependence on a context. We propose "metacontextuality" as a
paradigm for containing as many contexts as we can, in order to be less-incomplete,
and discuss some possible consequences.Carlos Gershenson2001-09-11Z2011-03-11T08:54:47Zhttp://cogprints.org/id/eprint/1795This item is in the repository with the URL: http://cogprints.org/id/eprint/17952001-09-11ZHETEROPHENOMENOLOGY VERSUS CRITICAL PHENOMENOLOGY: A DIALOGUE WITH DAN DENNETTABSTRACT. The following is an email interchange that took place between Dan Dennett and myself in the period 14th to 28th June, 2001. The discussion tries to clarify some essential features of the "heterophenomenology" developed in his book Consciousness Explained (1996), and how this differs from a form of "critical phenomenology" implicit in my own book Understanding Consciousness (2000), and developed in my edited volume Investigating Phenomenal Consciousness: new methodologies and maps (2000). The departure point for the discussion is a paper posted on Dan's website that summarises a related debate between Dan, David Chalmers and Alvin Goldman (Dennett, 2001). To make the discussion easier to follow, the multiple embeddings have been removed (restoring the sequence in which the comments were written). I have also corrected a few typos and grammatical errors. However, the text of the emails remains exactly the same.
In Round 1, I suggest that scientific investigations of consciousness are better described as a form of "critical phenomenology" that accepts conscious experiences to be real rather than as a "heterophenomenology" which remains neutral about or denies their existence. Dan replies that I have misunderstood his position - he doesn't deny that conscious experiences exist. Conscious experiences just don't have the first-person phenomenal properties that they are commonly thought to have and, in his view, science remains neutral about the nature of such properties.
In Round 2, I agree with Dan that science initially remains neutral about how to understand the nature of conscious experiences. Nevertheless, the phenomenology of consciousness provides the data that scientists are trying to understand. A better understanding of data does not, in general, make the data disappear. I also ask: if you remove the phenomena from phenomenal consciousness, in what sense is whatever remains "consciousness"? And if one removes all the phenomenal content from what one takes consciousness to be, doesn't this amount to a denial of the existence of "consciousness" in any ordinary sense of this term? Dan's reply likens beliefs in phenomenal properties to the belief in evil spirits causing disease. He has no doubts that diseases such as whooping cough and tuberculosis are real, but this doesn't require him to believe in evil spirits. And what's left, once one removes phenomenal properties, is what a zombie and a so-called conscious person have in common: a given set of functional properties that enable them to carry out the tasks we normally think of as conscious.
In Round 3, I summarise our similarities and differences. We agree that first-person reports are not incorrigible and that third-person information may throw light on how to interpret them. We also agree that first-person reports are reports of "something", although we disagree about the nature of that something. I suggest that Dan is sceptical about first-person reports rather than heterophenomenologically "neutral" (e.g. when he likens belief in phenomenal properties to belief in evil spirits). While we agree that science is likely to deepen our understanding of consciousness, I repeat that, unlike the replacement of old theories by better theories, a deeper understanding of phenomena does not in general replace the phenomena themselves. Rather than third-person data replacing first-person reports, the former are required to make sense of the latter, making their relationship complementary and mutually irreducible. In fact, there are many cases where science takes the reality of first-person phenomenology seriously, for example in the extensive literature on pain and its alleviation. If this can't be squeezed into an exclusively third-person view of science, then we will just have to adjust our view of science - something that a "critical phenomenology" achieves at little cost. At the time of this editing, Dan has not replied.
Reference. Dennett, D. (2001) The fantasy of first-person science.
http://ase.tufts.edu/cogstud/pubpage.htm
Max Velmans2002-01-13Z2011-03-11T08:54:52Zhttp://cogprints.org/id/eprint/2031This item is in the repository with the URL: http://cogprints.org/id/eprint/20312002-01-13ZKinship and evolved psychological dispositions: The Mother's Brother controversy reconsidered (To appear in Current Anthropology)The article revisits the old controversy concerning the relation of the mother's brother and sister's son in patrilineal societies in the light both of anthropological criticisms of the very notion of kinship and of evolutionary and epidemiological approaches to culture. It argues that the ritualized patterns of behavior that had been discussed by Radcliffe-Brown, Goody and others are to be explained in terms of the interaction of a variety of factors, some local and historical, others pertaining to general human dispositions. In particular, an evolved disposition to favor relatives can contribute to the development and stabilization of these behaviors, not by directly generating them, but by making them particularly "catchy" and resilient. In this way, it is possible to recognize both that cultural representations and practices are specific to a community at a time in its history (rather than mere tokens of a general type), and that they are, in essential respects, grounded in the common evolved psychology of human beings.Maurice BlochDan Sperber2001-10-04Z2011-03-11T08:54:48Zhttp://cogprints.org/id/eprint/1813This item is in the repository with the URL: http://cogprints.org/id/eprint/18132001-10-04ZA NATURAL ACCOUNT OF PHENOMENAL CONSCIOUSNESS.Physicalists commonly argue that conscious experiences are nothing more than states of the brain, and that conscious qualia are observer-independent, physical properties of the external world. Although this assumes the 'mantle of science,' it routinely ignores the findings of science, for example in sensory physiology, perception, psychophysics, neuropsychology and comparative psychology. 
Consequently, although physicalism aims to naturalise consciousness, it gives an unnatural account of it. It is possible, however, to develop a natural, nonreductive, reflexive model of how consciousness relates to the brain and the physical world. This paper introduces such a model and shows how it construes the nature of conscious experience. Within this model the physical world as perceived (the phenomenal world) is viewed as part of conscious experience, not apart from it. While in everyday life we treat this phenomenal world as if it is the "physical world", it is really just one biologically useful representation of what the world is like that may differ in many respects from the world described by physics. How the world as perceived relates to the world as described by physics can be investigated by normal science (e.g. through the study of sensory physiology, psychophysics and so on). This model of consciousness appears to be consistent both with third-person evidence of how the brain works and with first-person evidence of what it is like to have a given experience. According to the reflexive model, conscious experiences are really how they seem. Max Velmans2001-03-11Z2011-03-11T08:54:36Zhttp://cogprints.org/id/eprint/1359This item is in the repository with the URL: http://cogprints.org/id/eprint/13592001-03-11ZReview of Don Dedrick, Naming the Rainbow: colour language, colour science, and culture.By spotlighting the irreducible role of cognitive processes between biology and culture, this synthesis and critique of the universalist tradition in colour science offers a genuine starting-point for all future 'serious inquiry
into the relationship between linguistic and non-linguistic aspects of colour classification'. John Sutton2003-04-24Z2011-03-11T08:55:15Zhttp://cogprints.org/id/eprint/2906This item is in the repository with the URL: http://cogprints.org/id/eprint/29062003-04-24ZScientific Models, Connectionist Networks, and Cognitive ScienceThe employment of a particular class of computer programs known as "connectionist networks" to model mental processes is a widespread approach to research in
cognitive science these days. Little has been written, however, on the precise connection that is thought to hold between such programs and actual in vivo cognitive
processes such that the former can be said to "model" the latter in a scientific sense. What is more, this relation can be shown to be problematic. In this paper I give
a brief overview of the use of connectionist models in cognitive science, and then explore some of the statements connectionists have made about the nature of the
"modeling relation" thought to hold between them and cognitive processes. Finally I show that these accounts are inadequate and that more work is necessary if
connectionist networks are to be seriously regarded as scientific models of cognitive processes.Christopher D. Green2002-04-12Z2011-03-11T08:54:55Zhttp://cogprints.org/id/eprint/2176This item is in the repository with the URL: http://cogprints.org/id/eprint/21762002-04-12ZA Solution to Goodman's ParadoxEnglish translation of a paper initially published in French in Dialogue under the title 'Une solution pour le paradoxe de Goodman'. In the classical version of Goodman's paradox, the universe where the problem takes place is ambiguous. The conditions of induction being accurately described, I then define a framework of n-universes, allowing the distinction, among the criteria of a given n-universe, between constants and variables. Within this framework, I distinguish between two versions of the problem, respectively taking place: (i) in an n-universe the variables of which are colour and time; (ii) in an n-universe the variables of which are colour, time and space. Finally, I show that each of these versions admits a specific resolution.Paul Franceschi2000-05-31Z2011-03-11T08:53:43Zhttp://cogprints.org/id/eprint/187This item is in the repository with the URL: http://cogprints.org/id/eprint/1872000-05-31ZClosure, Function, Emergence, Semiosis and Life: The Same Idea? 
Reflections on the Concrete and the Abstract in Theoretical Biology.In this note some epistemological problems in general theories about living systems are considered; in particular, the question of hidden connections between different areas of experience, such as folk biology and scientific biology, and hidden connections between central concepts of theoretical biology, such as function, semiosis, closure and life.Claus Emmeche2000-06-14Z2011-03-11T08:53:42Zhttp://cogprints.org/id/eprint/149This item is in the repository with the URL: http://cogprints.org/id/eprint/1492000-06-14ZTHE MIND AND BRAIN SCHOLAR AS A HITCH-HIKER IN POST-GUTENBERG GALAXY: PUBLISHING AT 2000 AND BEYONDElectronic journal (e-journal) publishing has started to change the ways we think about publishing. However, many scholars and scientists in the mind and brain sciences are still ignorant of the new possibilities and on-going debates. This paper will provide a summary of the issues involved, give an update of the current discussion, and supply practical information on issues related to e-journal publishing and self-archiving relevant for the mind and brain sciences. Issues such as differences between traditional and e-journal publishing, open archive initiatives, world-wide conventions, quality control, costs involved in e-journal publishing, and copyright questions will be addressed. Practical hints on how to self-archive, how to submit to the e-journal Psycoloquy, how to create an open research archive, and where to find information relevant to e-publishing will be supplied.Brigitte StemmerMarianne CorreYves Joanette2001-02-10Z2011-03-11T08:54:32Zhttp://cogprints.org/id/eprint/1300This item is in the repository with the URL: http://cogprints.org/id/eprint/13002001-02-10ZCartwright on Laws and CompositionCartwright attempts to argue from an analysis of the composition of forces, and more generally the composition of laws, to the conclusion that laws must be regarded as false. 
A response to Cartwright is developed which contends that, properly understood, composition poses no threat to the truth of laws, while agreeing with Cartwright that laws do not satisfy the facticity requirement. My analysis draws especially on the work of Creary, Bhaskar, and Mill, and points towards a general rejection of Cartwright's view that laws, especially fundamental laws, should be seen as false. David Jon Spurrett2001-08-30Z2011-03-11T08:54:46Zhttp://cogprints.org/id/eprint/1773This item is in the repository with the URL: http://cogprints.org/id/eprint/17732001-08-30ZComplexity and Scientific ModellingThere have been many attempts at formulating measures of complexity of physical processes. Here we reject this direct approach and attribute complexity only to models of these processes in a given language, to reflect their "difficulty". A framework for modelling is outlined which includes the language of modelling, the complexity of models in that language, the error in the model's predictions and the specificity of the model. Many previous formulations of complexity can be seen as either: a special case of this framework; attempts to "objectify" complexity by considering only minimally complex models or its asymptotic behaviour; relativising it to a fixed mathematical structure in the absence of noise; or misnamed in that they capture the specificity rather than the complexity. Such a framework makes sense of a number of aspects of scientific modelling. Complexity does not necessarily correspond to a lack of simplicity or lie between order and disorder. When modelling is done by agents with severe resource limitations, the acceptable trade-offs between complexity, error and specificity can determine the effective relations between these. The characterisation of noise will emerge from this. 
Simpler theories are not a priori more likely to be correct but sometimes preferring the simpler theory at the expense of accuracy can be a useful heuristic.Bruce Edmonds2000-03-04Z2011-03-11T08:53:53Zhttp://cogprints.org/id/eprint/403This item is in the repository with the URL: http://cogprints.org/id/eprint/4032000-03-04ZEVOLUTION, COMMUNICATION AND THE PROPER FUNCTION OF LANGUAGELanguage is both a biological and a cultural phenomenon. Our aim here is to discuss, in an evolutionary perspective, the articulation of these two aspects of language. For this, we draw on the general conceptual framework developed by Ruth Millikan (1984) while at the same time dissociating ourselves from her view of language.Gloria OriggiDan Sperber2000-11-23Z2011-03-11T08:54:25Zhttp://cogprints.org/id/eprint/1076This item is in the repository with the URL: http://cogprints.org/id/eprint/10762000-11-23ZIn Reply [Reply to Commentaries on "How to Solve the Mind-Body Problem"]noneNicholas Humphrey2003-04-15Z2011-03-11T08:55:15Zhttp://cogprints.org/id/eprint/2870This item is in the repository with the URL: http://cogprints.org/id/eprint/28702003-04-15ZIn Reply [Reply to Commentaries on "How to Solve the Mind-Body Problem"]noneNicholas Humphrey1999-08-26Z2011-03-11T08:53:52Zhttp://cogprints.org/id/eprint/391This item is in the repository with the URL: http://cogprints.org/id/eprint/3911999-08-26ZIs Supervenience Asymmetric?After some preliminary clarifications, arguments for the supposed asymmetry of supervenience and determination, such as they are, are shown to be unsound. An argument against the supposed asymmetry is then constructed and defended against objections. This is followed by explanations of why the intuition of asymmetry is nonetheless so entrenched, and of how the asymmetric ontological priority of the physical over the non-physical can be understood without the supposed asymmetry of supervenience and determination.John F. 
Post2001-12-14Z2011-03-11T08:54:51Zhttp://cogprints.org/id/eprint/1982This item is in the repository with the URL: http://cogprints.org/id/eprint/19822001-12-14ZProblems in the "functional" investigations of consciousnessThis article presents the view that the problem of consciousness by definition cannot be seen as a strictly scientific or strictly philosophical problem. The first idea, especially, leads to important difficulties: first of all, the idea has in most cases implied some rather superficial reductionistic or functionalistic a priori assumptions, and, secondly, it can be shown that some of the most commonly used empirical methods in these regards are inadequate, especially so in the case of contrastive analysis, widely used in cognitive neuroscience. However, this criticism does not lead to the conclusion that scientific methods are inadequate as such, only that they always work on a pre-established background of theory, about which one must be explicit.Morten Overgaard2000-01-28Z2011-03-11T08:54:20Zhttp://cogprints.org/id/eprint/840This item is in the repository with the URL: http://cogprints.org/id/eprint/8402000-01-28ZTowards a Descriptive Model of Agent Strategy SearchIt is argued that, due to the complexity of most economic phenomena, the chances of deriving correct models from a priori principles are small. Instead, a more descriptive approach to modelling should be pursued. Agent-based modelling is characterised as a step in this direction. However, many agent-based models use off-the-shelf algorithms from computer science without regard to their descriptive accuracy. This paper attempts an agent model that describes the behaviour of subjects reported by Joep Sonnemans as accurately as possible. It takes a structure that is compatible with current thinking in cognitive science and explores the nature of the agent processes that then match the behaviour of the subjects. This suggests further modelling improvements and experiments.B. 
Edmonds1999-06-28Z2011-03-11T08:54:02Zhttp://cogprints.org/id/eprint/545This item is in the repository with the URL: http://cogprints.org/id/eprint/5451999-06-28ZChallenging the Computational Metaphor: Implications for How We ThinkThis paper explores the role of the traditional computational metaphor in our thinking as computer scientists, its influence on epistemological styles, and its implications for our understanding of cognition. It proposes to replace the conventional metaphor--a sequence of steps--with the notion of a community of interacting entities, and examines the ramifications of such a shift on these various ways in which we think.Lynn Andrea Stein1999-04-08Z2011-03-11T08:54:02Zhttp://cogprints.org/id/eprint/534This item is in the repository with the URL: http://cogprints.org/id/eprint/5341999-04-08ZThe Cost of Rational AgencyThe rational agency assumption limits systems to domains of application that have never been observed. Moreover, representing agents as being rational in the sense of maximising utility subject to some well-specified constraints renders software systems virtually unscalable. These properties of the rational agency assumption are shown to be unnecessary in representations or analogies of markets. The demonstration starts with an analysis of how the rational agency assumption limits the applicability and scalability of the IBM information filtering economy. An unrestricted specification of the information filtering economy is developed from an analysis of the properties of markets as systems and the implementation of a model based on intelligent agents. 
This extended information filtering economy model is used to test the analytical results on the scope for agents to act as intermediaries between human users and information sources.Scott Moss2000-08-28Z2011-03-11T08:54:21Zhttp://cogprints.org/id/eprint/898This item is in the repository with the URL: http://cogprints.org/id/eprint/8982000-08-28ZElaboration of the New Paradigm of Interdisciplinary InvestigationsIn the article, the problem of creating a theoretical system for approaching the complex phenomena of Reality is discussed. The idea is expressed that epistemology, as the theory of cognitive process, has a dissociative character. The postulate of an integrated informational system is formulated. Such a postulate is a suggested basis for the creation of a unified methodology of cognition (investigation) which makes it possible to elaborate a new paradigm of interdisciplinary investigations as a separate scientific discipline which has its own methods and special objects. The article will be of interest to methodologists of science.Serge Patlavskiy2004-07-06Z2011-03-11T08:55:31Zhttp://cogprints.org/id/eprint/3571This item is in the repository with the URL: http://cogprints.org/id/eprint/35712004-07-06ZElaboration of the New Paradigm of Interdisciplinary InvestigationsIn the article, the problem of constructing a meta-theory for approaching the complex phenomena of Reality is discussed. The postulate of an integrated information system is formulated. Such a postulate is a suggested basis for the creation of a unified methodology of cognition (investigation) which makes it possible to elaborate a new paradigm of interdisciplinary investigations as a separate scientific discipline which has its own methods and special objects. The article will be of interest to philosophers and methodologists of science.
Serge Patlavskiy2005-12-05Z2011-03-11T08:56:14Zhttp://cogprints.org/id/eprint/4633This item is in the repository with the URL: http://cogprints.org/id/eprint/46332005-12-05ZElaboration of the New Paradigm of Interdisciplinary InvestigationsIn the article, the problem of constructing a meta-theory for approaching the complex phenomena of Reality is discussed. The postulate of an integrated information system is formulated. Such a postulate is a suggested basis for the creation of a unified methodology of cognition (investigation) which makes it possible to elaborate a new paradigm of interdisciplinary investigations as a separate scientific discipline which has its own methods and special objects. The article will be of interest to philosophers and methodologists of science.
Serge Patlavskiy2002-04-11Z2011-03-11T08:54:55Zhttp://cogprints.org/id/eprint/2172This item is in the repository with the URL: http://cogprints.org/id/eprint/21722002-04-11ZThe Doomsday Argument and Hempel's ProblemEnglish translation of a paper originally published in French in the Canadian Journal of Philosophy under the title 'Comment l'urne de Carter et Leslie se déverse dans celle de Hempel'. In this paper, I present firstly a solution to Hempel's Problem. I recall secondly the solution to the Doomsday Argument described in my previous Une Solution pour l'Argument de l'Apocalypse (Canadian Journal of Philosophy 1998-2) and remark that both solutions are based on a similar line of reasoning. I show thirdly that the Doomsday Argument can be reduced to the core of Hempel's Problem.Paul Franceschi1999-06-27Z2011-03-11T08:53:52Zhttp://cogprints.org/id/eprint/385This item is in the repository with the URL: http://cogprints.org/id/eprint/3851999-06-27ZWhat do mathematicians teach us about the World? An anthropological perspectiveThe activity of mathematicians is examined here in an anthropological perspective. The task effectively performed reveals that, independently of their own representation, mathematicians produce in actuality a « virtual physics ». The principles of demonstrative proof, as described and assessed by Aristotle, are first introduced, displaying a latitude in the demonstrative methodology open to mathematicians, with modes of proof ranging from the compelling to the merely plausible. Even such leeway in the matter of proof has been felt at times by mathematicians as an intolerable constraint. The proof by reductio ad absurdum is shown to be by-passable and effectively by-passed by mathematicians. The calculus is examined, which Morris Kline characterized both as « the most original and most fruitful concept in all of mathematics » and as being plagued by a lack of mathematical rigor. 
The reason for this is that the world in its very build forced the calculus to be what it became, at times in contradiction with the mathematical code of practice. The mathematician enters the world of mathematics armed with his intuition of how the world at large operates. This he imports within mathematics and designs mathematical objects with an in-built « virtually physical » plausibility. The culture around him is impatient with mathematics which do not find their way to providing models. A double system of constraints, both inner and outer, contributes to making mathematics a « virtual physics ».Paul Jorion2000-03-21Z2011-03-11T08:54:04Zhttp://cogprints.org/id/eprint/557This item is in the repository with the URL: http://cogprints.org/id/eprint/5572000-03-21ZThe adaptationist stance and evolutionary computationIn this paper the connections between the evolutionary paradigm called adaptationism and the field of evolutionary computation (EC) will be outlined. After giving an introduction to adaptationism, we will try to show that the so-called adaptationist stance can be applied in EC as well as in biology and that this application may have significant benefits. It will also be shown that this approach has serious, inherent limitations in both cases, especially in the case of EC, because we lack the language which could be used to form the theories; these representational limitations can be handled, however, by devoting effort to constructing this language.Mark Jelasity1999-08-24Z2011-03-11T08:53:52Zhttp://cogprints.org/id/eprint/389This item is in the repository with the URL: http://cogprints.org/id/eprint/3891999-08-24ZBreakwater: The New Wave, Supervenience and IndividualismNew-wave psychoneural reduction, a la Bickle and Churchland, conflicts with the way certain adaptation properties are individuated according to evolutionary biology. Such properties cannot be reduced to physical properties of the token items that have the adaptation properties. 
The New Wave may entail a form of individualism inconsistent with evolutionary biology. All of this causes serious trouble as well for Jaegwon Kim's thesis of the Causal Individuation of Kinds, his Weak Supervenience thesis, Alexander's Dictum, his synchronicity thesis that all psychological kinds supervene on the contemporaneous physical states of the organism, his Correlation Thesis, and indeed his Restricted Correlation Thesis. All these theses are strongly individualist, in the sense of entailing that ALL a thing's properties are determined by its own physical properties and relations, contrary to many properties in biology and psychology.John F. Post2004-01-13Z2011-03-11T08:55:27Zhttp://cogprints.org/id/eprint/3379This item is in the repository with the URL: http://cogprints.org/id/eprint/33792004-01-13ZThe Completeness of PhysicsThe present work is focussed on the completeness of physics, or what is here called the Completeness Thesis: the claim that the domain of the physical is causally closed. Two major questions are tackled: How best is the Completeness Thesis to be formulated? What can be said in defence of the Completeness Thesis? My principal conclusions are that the Completeness Thesis can be coherently formulated, and that the evidence in favour of it significantly outweighs that against it.
In opposition to those who argue that formulation is impossible because no account of what is to count as physical can be provided, I argue that as long as the purpose of the argument in which the account is to be used is borne in mind, there are no significant difficulties. The account of the physical which I develop holds as physical whatever is needed to fix the likelihood of pre-theoretically given physical effects, and hypothesises in addition that no chemical, biological or psychological factors will be needed in this way. The Completeness Thesis thus formulated is coherent, and has significant empirical content.
In opposition to those who defend the doctrine of emergentism by means of philosophical arguments I contend that those arguments are flawed, setting up misleading dichotomies between needlessly attenuated alternatives and assuming the truth of what is to be proved. Against those who defend emergentism by appeal to the evidence, I argue that the history of science since the nineteenth century shows clearly that the empirical credentials of the view that the world is causally closed at the level of a small number of purely physical forces and types of energy are stronger than ever, and the credentials of emergentism correspondingly weaker.
In opposition to those who argue that difficulties with reductionism point to the implausibility of the Completeness Thesis I argue that completeness in no way entails the kinds of reductionism which give rise to the difficulties in question. I argue further that the truth of the Completeness Thesis is in fact compatible with a great deal of taxonomic disorder and the impossibility of any general reduction of non-fundamental descriptions to fundamental ones.
In opposition to those who argue that the epistemological credentials of fundamental physical laws are poor, and that those laws should in fact be seen as false, I contend that truth preserving accounts of fundamental laws can be developed. Developing such an account, I test it by considering cases of the composition of forces and causes, where what takes place is different to what is predicted by reference to any single law, and argue that viewing laws as tendencies allows their truth to be preserved, and sense to be made of both the experimental discovery of laws, and the fact that composition enables accurate prediction in at least some cases.
David Spurrett2000-03-23Z2011-03-11T08:53:53Zhttp://cogprints.org/id/eprint/404This item is in the repository with the URL: http://cogprints.org/id/eprint/4042000-03-23ZCONCEPTUAL TOOLS FOR A NATURAL SCIENCE OF SOCIETY AND CULTUREThis is the text of the Radcliffe-Brown Lecture in Social Anthropology 1999 (To appear in the Proceedings of the British Academy). In it, I argue that to approach society and culture in a naturalistic way, the domain of the social sciences must be reconceptualised by recognising only entities and processes of which we have a naturalistic understanding. These are mental representations and public productions, the processes that causally link them, the causal chains that bond these links, and the complex webs of such causal chains that criss-cross human populations over time and space. Such causal chains may distribute and stabilise representations and productions throughout a human population, thereby generating culture. The lecture introduces several conceptual tools useful for such a naturalistic approach, and illustrates their use with the case study of ritual activity in a Southern Ethiopian household.Dan Sperber1999-07-16Z2011-03-11T08:53:43Zhttp://cogprints.org/id/eprint/183This item is in the repository with the URL: http://cogprints.org/id/eprint/1831999-07-16ZThe Fifth InfluenceThis article is a theoretical consideration of the role of sensory pleasure and mental joy as optimizers of behavior. It ends with an axiomatic proposal. When comparing the human body to its environment, philosophers recognise the cosmos as the Large Infinite, and the atomic particles as the Small Infinite. The human brain reaches such a degree of complexity that it may be considered as a third infinite in the universe, a Complex Infinite. It follows that any force capable of moving such an infinite deserves a place among the forces of the universe. 
Physicists have recognized four forces: the gravitational, the electromagnetic, the weak, and the strong nuclear force. Forces are defined in four dimensions (reversible or not in time) and it is postulated that these forces are valid and applicable everywhere. Pleasure and displeasure, the affective axis of consciousness, can move the infinitely complex into action, and no human brain can avoid the tendency to maximize its pleasure. Therefore, we suggest, axiomatically, that the affective capability of consciousness operates in a way similar to the four forces of physics, i.e. it influences the behavior of conscious agents in a way similar to the way the four forces influence masses and particles. However, since a mental phenomenon is dimensionless, we propose to call the affective capability of consciousness the fifth influence rather than the fifth force.Michel CabanacRemi A. CabanacHarold T. Hammel2001-02-05Z2011-03-11T08:54:29Zhttp://cogprints.org/id/eprint/1272This item is in the repository with the URL: http://cogprints.org/id/eprint/12722001-02-05ZFundamental Laws and the Completeness of PhysicsThe status of fundamental laws is an important issue when deciding between the three broad ontological options of fundamentalism (of which the thesis that physics is complete is typically a sub-type), emergentism, and disorder or promiscuous realism. Cartwright's assault on fundamental laws, which argues that such laws do not, and cannot, typically state the facts, and hence cannot be used to support belief in a fundamental ontological order, is discussed in this context. A case is made in defence of a moderate form of fundamentalism, which leaves open the possibility of emergentism, but sets itself against the view that our best ontology is disordered. 
The argument, taking its cue from Bhaskar, relies on a consideration of the epistemic status of experiments, and the question of the possible generality of knowledge gained in unusual or controlled environments.David Jon Spurrett1999-08-24Z2011-03-11T08:53:52Zhttp://cogprints.org/id/eprint/390This item is in the repository with the URL: http://cogprints.org/id/eprint/3901999-08-24ZHow to Refute Principles of Sufficient ReasonOutlines a conceptual argument against the Principle of Sufficient Reason. The argument is presented in detail in earlier work, and is based on deductive inferences from PSR's own concept of explanation. The argument shows that not everything can have an explanation of the sort claimed by PSR. So far from being a presupposition of reason itself, as some think, PSR can be refuted by reason, arguing only from PSR's own concept of explanation. Hence PSR cannot be used to argue that there must be some explanation or reason for existence, invisible at least to science, or that because we do not or cannot know the explanation, there must be irreducible mystery about why there is anything at all rather than nothing, including why there was a Big Bang in the first place.John F. Post1998-06-19Z2011-03-11T08:53:49Zhttp://cogprints.org/id/eprint/332This item is in the repository with the URL: http://cogprints.org/id/eprint/3321998-06-19ZInnateness Is Canalization: In Defense of a Developmental Account of InnatenessLorenz proposed in his (1935) articulation of a theory of behavioral instincts that the objective of ethology is to distinguish behaviors that are innate from behaviors that are learned (or acquired). Lorenz's motive was to open the investigation of certain adaptive behaviors to evolutionary theorizing. Accordingly, since innate behaviors are genetic, they are open to such investigation. By Lorenz's light, an innate/acquired or learned dichotomy rested on a familiar Darwinian distinction between genes and environments. 
Ever since Lorenz, ascriptions of innateness have become widespread in the cognitive, behavioral, and biological sciences. The trend continues despite decades of strong arguments that show, in particular, that the dichotomy Lorenz invoked in his theory of behavioral instincts is literally false: no biological trait is the product of genes alone. Some critics suggest that the failure of Lorenz's account shows that innateness is not well-defined in biology and that the practice of ascribing innateness to various biological traits should be dropped from respectable science. Elsewhere (Ariew 1996) I argued that despite the arguments of critics, there really is a biological phenomenon underlying the concept of innateness. On my view, innateness is best understood in terms of C.H. Waddington's concept of canalization, i.e. the degree to which a trait is innate is the degree to which its developmental outcome is canalized. The degree to which a developmental outcome is canalized is the degree to which the developmental process is bound to produce a particular end state despite environmental fluctuations both in the development's initial state and during the course of development. The canalization account differs in many ways from the traditional ways in which ethologists such as Konrad Lorenz originally understood the concept of innateness. Most importantly, on the canalization account the distinction between innate and acquired is not a dichotomy, as Konrad Lorenz had it, but rather a matter of degree that lies along a spectrum, with highly canalized developmental outcomes at one end and highly environmentally sensitive developmental outcomes at the other. 
Nevertheless, I justified the canalization account on the basis of a set of desiderata or criteria that I suggested falls out of what seemed uncontroversial about Lorenz's account of innateness (briefly): innateness is a property of a developing individual, innateness denotes environmental stability, and innate-ascriptions are useful in certain natural selection explanations (more below). From that same set of desiderata I argued (in my 1996) that neither the concept of heritability nor that of norms of reaction (two concepts from population genetics) suffices to ground innateness. In this essay, I wish to provide further support for the canalization account in two ways. First, I wish to better motivate the desiderata by revisiting a debate between Konrad Lorenz and Daniel Lehrman over the meaning and explanatory usefulness of innate ascriptions in ethology. Second, I wish to compare my canalization account of innateness with accounts proposed by contemporary philosophers, one by Stephen Stich (1975), another by Elliott Sober (forthcoming), and a third by William Wimsatt (1986).Andre Ariew1999-07-12Z2011-03-11T08:53:52Zhttp://cogprints.org/id/eprint/387This item is in the repository with the URL: http://cogprints.org/id/eprint/3871999-07-12ZIntersubjective ScienceThe study of consciousness in modern science is hampered by deeply ingrained, dualist presuppositions about the nature of consciousness. In particular, conscious experiences are thought to be private and subjective, contrasting with physical phenomena which are public and objective. In the present article, I argue that all observed phenomena are, in a sense, private to a given observer, although there are some events to which there is public access. Phenomena can be objective in the sense of intersubjective, investigators can be objective in the sense of truthful or dispassionate, and procedures can be objective in being well-specified, but observed phenomena cannot be objective in the sense of being observer-free. 
Phenomena are only repeatable in the sense that they are judged by a community of observers to be tokens of the same type. Stripped of its dualist trappings, the empirical method becomes "if you carry out these procedures you will observe or experience these results", which applies as much to a science of consciousness as it does to physics.Max Velmans2001-08-30Z2011-03-11T08:54:47Zhttp://cogprints.org/id/eprint/1778This item is in the repository with the URL: http://cogprints.org/id/eprint/17782001-08-30ZThe Pragmatic Roots of ContextWhen modelling complex systems one cannot include all the causal factors, but one has to settle for partial models. This is acceptable if the factors left out are either so constant that they can be ignored or one is able to recognise the circumstances when they will be such that the partial model applies. The transference of knowledge from the point of learning to the point of application utilises a combination of recognition and inference: a simple model of the important features is learnt, and later situations where inferences can be drawn from the model are recognised. Context is an abstraction of the collection of background features that are later recognised. Different heuristics for recognition and model formulation will be effective for different learning tasks. Each of these will lead to a different type of context. Given this, there are (at least) two ways of modelling context: one can either attempt to investigate the contexts that arise out of the heuristics that a particular agent actually applies (the `internal' approach); or (if this is feasible) one can attempt to model context using the external source of regularity that the heuristics exploit. 
There are also two basic methodologies for the investigation of context: a top-down (or `foundationalist') approach where one tries to lay down general, a priori principles, and a bottom-up (or `scientific') approach where one tries to find what sorts of context arise through experiment and simulation. A simulation is exhibited which is designed to illustrate the practicality of the bottom-up approach in elucidating the sorts of internal context that arise in an artificial agent which is attempting to learn simple models of a complex environment. The paper ends with a plea for cooperation between the AI and Machine Learning communities, as both learning and inference are needed if context is to make complete sense.Bruce Edmonds1999-07-14Z2011-03-11T08:53:52Zhttp://cogprints.org/id/eprint/388This item is in the repository with the URL: http://cogprints.org/id/eprint/3881999-07-14ZSubjekt und Selbstmodell. Die Perspektivität phänomenalen Bewußtseins vor dem Hintergrund einer naturalistischen Theorie mentaler RepräsentationThis book contains a representationalist theory of self-consciousness and of the phenomenal first-person perspective. It draws on empirical data from the cognitive sciences and the neurosciences.Thomas K. Metzinger2000-02-09Z2011-03-11T08:53:41Zhttp://cogprints.org/id/eprint/139This item is in the repository with the URL: http://cogprints.org/id/eprint/1392000-02-09ZThe theory of the organism-environment system: III. Role of efferent influences on receptors in the formation of knowledge.The present article is an attempt to give, within the framework of the theory of the organism-environment system (Jarvilehto 1998a), a new interpretation of the role of efferent influences on receptor activity and of the functions of the senses in the formation of knowledge. 
It is argued, on the basis of experimental evidence and theoretical considerations, that the senses are not transmitters of environmental information, but rather create a direct connection between the organism and the environment, which makes the development of a dynamic living system, the organism-environment system, possible. In this connection process the efferent influences on receptor activity are of particular significance, because with their help the receptors may be adjusted in relation to the parts of the environment which are most important in the achievement of behavioral results. Perception is the process of joining new parts of the environment to the organism-environment system; thus, the formation of knowledge by perception is based on reorganization (widening and differentiation) of the organism-environment system, and not on transmission of information from the environment. With the help of the efferent influences on receptors each organism creates its own peculiar world which is simultaneously subjective and objective. The present considerations have far-reaching implications both for experimental work in the neurophysiology and psychology of perception and for philosophical accounts of knowledge formation.Timo Jarvilehto1999-10-18Z2011-03-11T08:53:53Zhttp://cogprints.org/id/eprint/394This item is in the repository with the URL: http://cogprints.org/id/eprint/3941999-10-18ZVisualizing practical knowledge: The Haughton-Mars ProjectTo improve how we envision knowledge, we must improve our ability to see knowledge in everyday life. That is, visualization is concerned not only with displaying facts and theories, but also with finding ways to express and relate tacit understanding. Such knowledge, although often referred to as "common," is not necessarily shared and may be distributed socially in choreographies for working together, in the manner that a chef and a maître d'hôtel, who obviously possess very different skills, coordinate their work. 
Furthermore, non-verbal concepts cannot in principle be inventoried. Reifying practical knowledge is not a process of converting the implicit into the explicit, but of pointing to what we know, showing its manifestations in our everyday life. To this end, I illustrate the study and reification of practical knowledge by examining the activities of a scientific expedition in the Canadian Arctic: a group of scientists preparing for a mission to Mars.William J. Clancey1998-06-18Z2011-03-11T08:54:12Zhttp://cogprints.org/id/eprint/690This item is in the repository with the URL: http://cogprints.org/id/eprint/6901998-06-18ZFacial beauty and fractal geometryWhat is it that makes a face beautiful? Average faces obtained by photographic (Galton 1878) or digital (Langlois & Roggman 1990) blending are judged attractive but not optimally attractive (Alley & Cunningham 1991) --- digital exaggerations of deviations from average face blends can lead to higher attractiveness ratings (Perrett, May, & Yoshikawa 1994). My novel approach to face design does not involve blending at all. Instead, the image of a female face with high ratings is composed from a fractal geometry based on rotated squares and powers of two. The corresponding geometric rules are more specific than those previously used by artists such as Leonardo and Duerer. They yield a short algorithmic description of all facial characteristics, many of which are compactly encodable with the help of simple feature detectors similar to those found in mammalian brains. 
This suggests that a face's beauty correlates with simplicity relative to the subjective observer's way of encoding it.Juergen Schmidhuber1998-06-22Z2011-03-11T08:54:12Zhttp://cogprints.org/id/eprint/695This item is in the repository with the URL: http://cogprints.org/id/eprint/6951998-06-22ZThe evolution of what?There is now a huge amount of interest in consciousness among scientists as well as philosophers, yet there is so much confusion and ambiguity in the claims and counter-claims that it is hard to tell whether any progress is being made. This ``position paper'' suggests that we can make progress by temporarily putting to one side questions about what consciousness is or which animals or machines have it or how it evolved. Instead we should focus on questions about the sorts of architectures that are possible for behaving systems and ask what sorts of capabilities, states and processes might be supported by different sorts of architectures. We can then ask which organisms and machines have which sorts of architectures. This combines the standpoints of the philosopher, the biologist and the engineer. If we can find a general theory of the variety of possible architectures (a characterisation of ``design space'') and the variety of environments, tasks and roles to which such architectures are well suited (a characterisation of ``niche space'') we may be able to use such a theory as a basis for formulating new, more precisely defined concepts with which to articulate less ambiguous questions about the space of possible minds. For instance our initially ill-defined concept (``consciousness'') might split into a collection of more precisely defined concepts which can be used to ask unambiguous questions with definite answers. As a first step this paper explores a collection of conjectures regarding architectures and their evolution. 
In particular we explore architectures involving a combination of coexisting architectural levels including: (a) reactive mechanisms which evolved very early, (b) deliberative mechanisms which evolved later in response to pressures on information processing resources, and (c) meta-management mechanisms that can explicitly inspect, evaluate and modify some of the contents of various internal information structures. It is conjectured that in response to the needs of these layers, perceptual and action subsystems also developed layers, and also that an ``alarm'' system which initially existed only within the reactive layer may have become increasingly sophisticated and extensive as its inputs and outputs were linked to the newer layers. Processes involving the meta-management layer in the architecture could explain the origin of the notion of ``qualia''. Processes involving the ``alarm'' mechanism and mechanisms concerned with resource limits in the second and third layers give us an explanation of three main forms of emotion, helping to account for some of the ambiguities which have bedevilled the study of emotion. Further theoretical and practical benefits may come from further work based on this design-based approach to consciousness. A deeper, longer-term implication is the possibility of a new science investigating laws governing possible trajectories in design space and niche space, as these form parts of high-order feedback loops in the biosphere.Aaron Sloman1998-10-19Z2011-03-11T08:53:51Zhttp://cogprints.org/id/eprint/362This item is in the repository with the URL: http://cogprints.org/id/eprint/3621998-10-19ZTHE THEORY OF THE ORGANISM-ENVIRONMENT SYSTEM: I. DESCRIPTION OF THE THEORYThe theory of the organism-environment system starts with the proposition that in any functional sense organism and environment are inseparable and form only one unitary system. 
The organism cannot exist without the environment, and the environment has descriptive properties only if it is connected to the organism. Although for practical purposes we do separate organism and environment, this common-sense starting point leads in psychological theory to problems which cannot be solved. Therefore, separation of organism and environment cannot be the basis of any scientific explanation of human behavior. The theory leads to a reinterpretation of basic problems in many fields of inquiry and makes possible the definition of mental phenomena without their reduction either to neural or biological activity or to separate mental functions. According to the theory, mental activity is activity of the whole organism-environment system, and the traditional psychological concepts describe only different aspects of organisation of this system. Therefore, mental activity cannot be separated from the nervous system, but the nervous system is only one part of the organism-environment system. This problem will be dealt with in detail in the second part of the article.Timo Jarvilehto1998-07-14Z2011-03-11T08:53:51Zhttp://cogprints.org/id/eprint/353This item is in the repository with the URL: http://cogprints.org/id/eprint/3531998-07-14ZLanguage of Thought Hypothesis: State of the ArtThe Language of Thought Hypothesis (LOTH) is an empirical thesis about thought and thinking. For their explication, it postulates a physically realized system of representations that have a combinatorial syntax (and semantics) such that operations on representations are causally sensitive only to the syntactic properties of representations. According to LOTH, thought is, roughly, the tokening of a representation that has a syntactic (constituent) structure with an appropriate semantics. Thinking thus consists in syntactic operations defined over representations. 
Most of the arguments for LOTH derive their strength from their ability to explain certain empirical phenomena like the productivity and systematicity of thought and thinking.Murat Aydede2003-06-06Z2011-03-11T08:54:51Zhttp://cogprints.org/id/eprint/1975This item is in the repository with the URL: http://cogprints.org/id/eprint/19752003-06-06ZConnectionism Reconsidered: Minds, Machines and ModelsIn this paper the issue of drawing inferences about biological cognitive systems on the basis of connectionist simulations is addressed. In particular, the justification of inferences based on connectionist models trained using the backpropagation learning algorithm is examined. First it is noted that a justification commonly found in the philosophical literature is inapplicable. Then some general issues are raised about the relationships between models and biological systems. A way of conceiving the role of hidden units in connectionist networks is then introduced. This, in combination with an assumption about the way evolution goes about solving problems, is then used to suggest a means of justifying inferences about biological systems based on connectionist research.Istvan Berkeley1998-03-13Z2011-03-11T08:53:46Zhttp://cogprints.org/id/eprint/244This item is in the repository with the URL: http://cogprints.org/id/eprint/2441998-03-13ZGoodbye to ReductionismTo understand consciousness we must first describe what we experience accurately. But oddly, current dualist vs reductionist debates characterise experience in ways which do not correspond to ordinary experience. Indeed, there is no other area of enquiry where the phenomenon to be studied has been so systematically misdescribed. 
Given this, it is hardly surprising that progress towards understanding the nature of consciousness has been limited.Max Velmans1999-01-17Z2011-03-11T08:53:52Zhttp://cogprints.org/id/eprint/378This item is in the repository with the URL: http://cogprints.org/id/eprint/3781999-01-17ZIntroductionIs there a universal biolinguistic disposition for the development of "basic" colour words? This question has been a subject of debate since Brent Berlin and Paul Kay's BASIC COLOR TERMS: THEIR UNIVERSALITY AND EVOLUTION was published in 1969. NAMING THE RAINBOW is the first extended study of this debate. The author describes and criticizes empirically and conceptually unified models of colour naming that relate basic colour terms directly to perceptual and ultimately to physiological facts, arguing that this strategy has overlooked the cognitive dimension of colour naming. He proposes a psychosemantics for basic colour terms which is sensitive to cultural difference and to the nature and structure of non-linguistic experience. Contemporary colour naming research is radically interdisciplinary and NAMING THE RAINBOW will be of interest to philosophers, psychologists, anthropologists, and cognitive scientists concerned with: biological constraints on cognition and categorization; problems inherent in cross-cultural and in interdisciplinary science; the nature and extent of cultural relativism.Don Dedrick1998-06-22Z2011-03-11T08:54:12Zhttp://cogprints.org/id/eprint/694This item is in the repository with the URL: http://cogprints.org/id/eprint/6941998-06-22ZThe ``Semantics'' of Evolution: Trajectories and Trade-offs in Design Space and Niche SpaceThis paper attempts to characterise a unifying overview of the practice of software engineers, AI designers, developers of evolutionary forms of computation, designers of adaptive systems, etc. The topic overlaps with theoretical biology, developmental psychology and perhaps some aspects of social theory. 
Just as much of theoretical computer science follows the lead of engineering intuitions and tries to formalise them, there are also some important emerging high-level cross-disciplinary ideas about natural information processing architectures and evolutionary mechanisms that can perhaps be unified and formalised in the future. There is some speculation about the evolution of human cognitive architectures and consciousness.Aaron Sloman1998-07-09Z2011-03-11T08:54:13Zhttp://cogprints.org/id/eprint/716This item is in the repository with the URL: http://cogprints.org/id/eprint/7161998-07-09ZThe ``Semantics'' of Evolution: Trajectories and Trade-offs in Design Space and Niche Space.This paper attempts to characterise a unifying overview of the practice of software engineers, AI designers, developers of evolutionary forms of computation, designers of adaptive systems, etc. The topic overlaps with theoretical biology, developmental psychology and perhaps some aspects of social theory. Just as much of theoretical computer science follows the lead of engineering intuitions and tries to formalise them, there are also some important emerging high-level cross-disciplinary ideas about natural information processing architectures and evolutionary mechanisms that can perhaps be unified and formalised in the future. There is some speculation about the evolution of human cognitive architectures and consciousness.A. Sloman1999-04-01Z2011-03-11T08:54:02Zhttp://cogprints.org/id/eprint/533This item is in the repository with the URL: http://cogprints.org/id/eprint/5331999-04-01ZSocial Embeddedness and Agent DevelopmentTwo different reasons for using agents are distinguished: the `engineering' perspective and the `social simulation' perspective. It is argued that this entails some differences in approach. 
In particular, the former will want to prevent unpredictable emergent features of their agent populations, whilst the latter will want to use simulation to study precisely such phenomena. A concept of `social embeddedness' is explicated which neatly distinguishes the two approaches. It is argued that such embedding in a society is an essential feature of being a truly social agent. This has the consequence that such agents will not sit well within an `engineering' methodology.Bruce Edmonds2001-03-15Z2011-03-11T08:54:36Zhttp://cogprints.org/id/eprint/1370This item is in the repository with the URL: http://cogprints.org/id/eprint/13702001-03-15ZThe Thoroughly Modern Aristotle: Was He Really a Functionalist?In recent years a debate has developed over whether Aristotle's theory of the psuchê is properly characterized as having been "functionalist" in the sense that contemporary computational cognitive scientists claim to be adherents of that position. It is argued here that there are indeed some similarities between Aristotle's theory and that of contemporary functionalists, but that the differences between them make it misleading, at best, for functionalists to look to Aristotle for ancient support. In particular, it is argued that Aristotle would not have -- indeed, specifically did not -- support the claim, central to functionalism, that the mind can, in principle, be transported from one body to another simply by instantiating in the new body some set of organizational properties that were instantiated in the old.Christopher D. Green1998-12-15Z2011-03-11T08:53:52Zhttp://cogprints.org/id/eprint/373This item is in the repository with the URL: http://cogprints.org/id/eprint/3731998-12-15ZColour categorization and the space between perception and language.We need to reconsider and reconceive the path that will take us from innate perceptual saliencies to basic (and perhaps other) colour language. 
There is a space between the perceptual and the linguistic levels that needs to be filled by an account of the rules that people use to generate relatively stable reference classes in a social context.
Don Dedrick

http://cogprints.org/id/eprint/328 (1998-06-18)
Individualism and Evolutionary Psychology (or: In Defense of 'Narrow' Functions)
Millikan and Wilson argue, for different reasons, that the essential reference to the environment in adaptationist explanations of behavior makes (psychological) individualism inconsistent with evolutionary psychology. I show that their arguments are based on misinterpretations of the role of reference to the environment in such explanations. By exploring these misinterpretations, I develop an account of explanation in evolutionary psychology that is fully consistent with individualism. This does not, however, constitute a full-fledged defense of individualism, since evolutionary psychology is only one explanatory paradigm among many in psychology.
David J. Buller

http://cogprints.org/id/eprint/671 (1998-06-06)
Abductive reasoning: Logic, visual thinking, and coherence
This paper discusses abductive reasoning---that is, reasoning in which explanatory hypotheses are formed and evaluated. First, it criticizes two recent formal logical models of abduction. An adequate formalization would have to take into account the following aspects of abduction: explanation is not deduction; hypotheses are layered; abduction is sometimes creative; hypotheses may be revolutionary; completeness is elusive; simplicity is complex; and abductive reasoning may be visual and non-sentential. Second, in order to illustrate visual aspects of hypothesis formation, the paper describes recent work on visual inference in archaeology. Third, in connection with the evaluation of explanatory hypotheses, the paper describes recent results on the computation of coherence.
P. Thagard, C. P. Shelley

http://cogprints.org/id/eprint/658 (1998-05-08)
The Analogical Mind
We examine the use of analogy in human thinking from the perspective of a multiconstraint theory, which postulates three basic types of constraints: similarity, structure and purpose. The operation of these constraints is apparent both in laboratory experiments on analogy and in naturalistic settings, including politics, psychotherapy, and scientific research. We sketch how the multiconstraint theory can be implemented in detailed computational simulations of the analogical human mind.
Keith J. Holyoak, P. Thagard

http://cogprints.org/id/eprint/667 (1998-06-06)
Coherent and Creative Conceptual Combinations.
Conceptual combinations range from the utterly mundane to the sublimely creative. Mundane combinations include a myriad of adjective-noun and noun-noun juxtapositions that crop up in everyday speaking and writing, such as blue car, cooked carrots, and radio phone. Creative combinations include some of the most important theoretical constructions in science, such as sound wave, bacterial infection, and natural selection. Both mundane and creative conceptual combinations are essential to our attempts to make sense of the world and people's utterances about it. This paper will show how the various aspects of conceptual combination discussed by Hampton (this volume), Shoben and Gagné (this volume) and Wisniewski (this volume) can be unified by seeing them as parts of a general coherence mechanism that people use to make sense of what they see and hear.
Paul Thagard

http://cogprints.org/id/eprint/325 (1998-06-16)
Computational and dynamical models of mind
Van Gelder (1995) has recently spearheaded a movement to challenge the dominance of connectionist and classicist models in cognitive science. The dynamical conception of cognition is van Gelder's replacement for the computation-bound paradigms provided by connectionism and classicism. He relies on the Watt Governor to fulfill the role of a dynamicist Turing Machine and claims that the Motivational Oscillatory Theory (MOT) provides a sound empirical basis for dynamicism. In other words, the Watt Governor is to be the theoretical exemplar of the class of systems necessary for cognition, and MOT is an empirical instantiation of that class. However, I shall argue that neither the Watt Governor nor MOT successfully fulfills these prescribed roles. This failure, along with van Gelder's peculiar use of the concept of computation and his struggle with representationalism, prevents him from providing a convincing alternative to current cognitive theories.
Chris Eliasmith

http://cogprints.org/id/eprint/670 (1998-06-06)
Inference to the best plan: A coherence theory of decision.
In their introduction to this volume, Ram and Leake usefully distinguish between task goals and learning goals. Task goals are desired results or states in an external world, while learning goals are desired mental states that a learner seeks to acquire as part of the accomplishment of task goals. We agree with the fundamental claim that learning is an active and strategic process that takes place in the context of tasks and goals (see also Holland, Holyoak, Nisbett, and Thagard, 1986). But there are important questions about the nature of goals that have rarely been addressed.
First, how can a cognitive system deal with incompatible task goals? Someone may want both to get lots of research done and to relax and have fun with his or her friends. Learning how to accomplish both these tasks will take place in the context of goals that cannot be fully realized together. Second, how are goals chosen in the first place, and why are some goals judged to be more important than others? People do not simply come equipped with goals and priorities: we sometimes have to learn what is important to us by adjusting the importance of goals in the context of other compatible and incompatible goals. This paper presents a theory and a computational model of how goals can be adopted or rejected in the context of decision making. In contrast to classical decision theory, it views decision making as a process not only of choosing actions but also of evaluating goals. Our theory can therefore be construed as concerned with the goal-directed learning of goals.
P. Thagard, E. Millgram

http://cogprints.org/id/eprint/674 (1998-06-07)
Internet Epistemology: Contributions of New Information Technologies to Scientific Research
Internet technologies, including electronic mail, preprint archives, and the World Wide Web, are now ubiquitous parts of scientific practice. After reviewing the full range of these technologies and sketching the history of their development, this paper provides an epistemological appraisal of their contributions to scientific research. It uses Alvin Goldman's epistemic criteria of reliability, power, fecundity, speed and efficiency to evaluate the largely positive impact of Internet technologies on the development of scientific knowledge.
P. Thagard

http://cogprints.org/id/eprint/402 (2000-03-03)
Intuitive and reflective beliefs
Humans have two kinds of beliefs, intuitive beliefs and reflective beliefs. Intuitive beliefs are a most fundamental category of cognition, defined in the architecture of the mind. They are formulated in an intuitive mental lexicon. Humans are also capable of entertaining an indefinite variety of higher-order or "reflective" propositional attitudes, many of which are of a credal sort. Reasons to hold "reflective beliefs" are provided by other beliefs that describe the source of the reflective belief as reliable, or that provide explicit arguments in favour of the reflective belief. The mental lexicon of reflective beliefs includes not only intuitive, but also reflective concepts.
Dan Sperber

http://cogprints.org/id/eprint/361 (1998-09-29)
Is the Mind Real?
The mind as a whole escapes objective studies because belief in mind-independent reality is self-contradictory and by definition excludes subjective experience (awareness, 'consciousness') from reality. The mind's center therefore vanishes in studies which imply exclusive objectivism or empiricism. This conceptual difficulty can be counteracted by acknowledging that all mental and world structures arise within an unstructured origin-and-matrix for knowledge-structures and beliefs. The mind's structure is thus at the center of reality. Use of such a zero-structure reference can also help to clarify some related conceptual difficulties and to bridge the gap between the 'two cultures'.
Herbert FJ Muller

http://cogprints.org/id/eprint/669 (1998-06-06)
Making sense of people: Coherence mechanisms.
When trying to make sense of other people and ourselves, we may rely on several different kinds of cognitive processes. First, we form impressions of other people by integrating information contained in concepts that represent their traits, their behaviors, our stereotypes of the social groups they belong to, and any other information about them that seems relevant. For example, your impression of an acquaintance may be a composite of personality traits (e.g., friendly, independent), behaviors (e.g., told a joke, donated money to the food bank), and social stereotypes (e.g., woman, doctor, Chinese). Second, we understand other people by means of causal attributions in which we form and evaluate hypotheses that explain their behavior. To explain why someone is abrupt on one occasion, you may hypothesize that this person is impatient or that he or she is under pressure from a work deadline. You believe the hypothesis that provides the best available explanation of the person's behavior. A third means of making sense of people is analogy: you can understand people through their similarity to other people or to yourself. For example, you may understand the stresses that your friend is experiencing by remembering an occasion when you yourself experienced similar stresses. This will allow you to predict your friend's likely feelings and behavior.
P. Thagard, Z. Kunda

http://cogprints.org/id/eprint/675 (1998-06-07)
Medical analogies: Why and How?
This paper describes the purposes served by medical analogies (why they are used) and the different cognitive processes that support those purposes (how they are used). Historical and contemporary examples illustrate the theoretical, experimental, diagnostic, therapeutic, technological, and educational value of medical analogies. Four models of analogical transfer illuminate how analogies are used in these cases.
P. Thagard

http://cogprints.org/id/eprint/313 (1998-06-15)
Objective, subjective and intersubjective selectors of knowledge
It is argued that the acceptance of knowledge in a community depends on several, approximately independent selection "criteria". The objective criteria are distinctiveness, invariance and controllability; the subjective ones are individual utility, coherence, simplicity and novelty; and the intersubjective ones are publicity, expressivity, formality, collective utility, conformity and authority. Science demarcates itself from other forms of knowledge by explicitly controlling for the objective criteria.
Francis Heylighen

http://cogprints.org/id/eprint/668 (1998-06-06)
Probabilistic Networks and Explanatory Coherence.
When surprising events occur, people naturally try to generate explanations of them. Such explanations usually involve hypothesizing causes that have the events as effects. Reasoning from effects to prior causes is found in many domains, including:
Social reasoning: when friends are acting strange, we conjecture about what might be bothering them.
Legal reasoning: when a crime has been committed, jurors must decide whether the prosecution's case gives a convincing explanation of the evidence.
Medical diagnosis: given a set of symptoms, a physician tries to decide what disease or diseases produced them.
Fault diagnosis in manufacturing: when a piece of equipment breaks down, a troubleshooter must try to determine the cause of the breakdown.
Scientific theory evaluation: scientists seek an acceptable theory to explain experimental evidence.
What is the nature of such reasoning?
Paul Thagard

http://cogprints.org/id/eprint/346 (1998-07-02)
Review of Brandon's "Concepts and Methods in Evolutionary Biology"
This book is a collection of essays by a leading philosopher of biology and spans almost the last twenty years of his career. Most of the topics that have been of concern to philosophers of biology in this period are touched on to some extent, and the collection of these essays in a convenient volume will certainly be welcomed by everyone working in this field. The essays are arranged chronologically, and divided into three sections. Although the chapters in the first section have substantial interconnections, being involved with fundamental conceptual issues in evolutionary theory, on the whole there is not much attempt to tie the book into anything more than a sequence of independent essays. The later sections, moreover, cover a quite diverse range of topics. The whole is neither more nor less than the sum of the parts. For
J. Dupre

http://cogprints.org/id/eprint/677 (1998-06-07)
Ulcers and Bacteria I: Discovery and Acceptance
In 1983, Dr. J. Robin Warren and Dr. Barry Marshall reported finding a new kind of bacteria in the stomachs of people with gastritis. Warren and Marshall were soon led to the hypothesis that peptic ulcers are generally caused, not by excess acidity or stress, but by a bacterial infection. Initially, this hypothesis was viewed as preposterous, and it is still somewhat controversial. In 1994, however, a U.S. National Institutes of Health Consensus Development Panel concluded that infection appears to play an important contributory role in the pathogenesis of peptic ulcers, and recommended that antibiotics be used in their treatment. Peptic ulcers are common, affecting up to 10% of the population, and evidence has mounted that many ulcers can be cured by eradicating the bacteria responsible for them.
P. Thagard

http://cogprints.org/id/eprint/678 (1998-06-07)
Ulcers and bacteria II: Instruments, experiments, and social interactions
My description of the cognitive processes involved in the discovery, development, and acceptance of the bacterial theory of ulcers might have left the impression that science is all in the mind (Thagard, forthcoming-b). But only part of the story of the bacterial theory of ulcers is psychological. This paper discusses the important role of physical interaction with the world by means of instruments and experiments, and the equally important role of social interactions among the medical researchers who developed the theory. The main questions I want to answer are the following:
1. What instruments contributed to the development and acceptance of the new theory?
2. What kinds of experiments contributed to the development and acceptance of the new theory?
3. How did theorizing and experimentation interact in the development of new experiments and hypotheses?
4. How did social processes such as collaboration, communication, and consensus contribute to the development and widespread acceptance of the bacterial theory of ulcers?
I conclude with a sketch of science as a complex system of interacting psychological, physical, and social processes.
P. Thagard

http://cogprints.org/id/eprint/294 (1998-05-08)
Waves, Particles, and Explanatory Coherence
Peter Achinstein (1990, 1991) analyses the scientific debate that took place in the eighteenth and nineteenth centuries concerning the nature of light. He offers a probabilistic account of the methods employed by both particle theorists and wave theorists, and rejects any analysis of this debate in terms of coherence. He characterizes coherence through reference to William Whewell's writings concerning how "consilience of inductions" establishes an acceptable theory (Whewell, 1847). Achinstein rejects this analysis because of its vagueness and lack of reference to empirical data, concluding that coherence is insufficient to account for the belief change that took place during the wave-particle debate.
C. Eliasmith, P. Thagard

http://cogprints.org/id/eprint/1774 (2001-08-30)
Pragmatic Holism
The reductionist/holist debate seems an impoverished one, with many participants appearing to adopt a position first and constructing rationalisations second. Here I propose an intermediate position of pragmatic holism: that, irrespective of whether all natural systems are theoretically reducible, for many systems it is completely impractical to attempt such a reduction; and that, regardless of whether irreducible `wholes' exist, it is vain to try to prove this in absolute terms.
This position thus illuminates the debate along new pragmatic lines, and refocusses attention on the underlying heuristics of learning about the natural world.
Bruce Edmonds

http://cogprints.org/id/eprint/718 (1998-07-18)
Actual Possibilities
This is a philosophical `position paper', starting from the observation that we have an intuitive grasp of a family of related concepts of ``possibility'', ``causation'' and ``constraint'' which we often use in thinking about complex mechanisms, and perhaps also in perceptual processes, which according to Gibson are primarily concerned with detecting positive and negative affordances, such as support, obstruction, graspability, etc. We are able to talk about, think about, and perceive possibilities, such as possible shapes, possible pressures, possible motions, and also risks, opportunities and dangers. We can also think about constraints linking such possibilities. If such abilities are useful to us (and perhaps other animals) they may be equally useful to intelligent artefacts. All this bears on a collection of different more technical topics, including modal logic, constraint analysis, qualitative reasoning, naive physics, the analysis of functionality, and the modelling of design processes. The paper suggests that our ability to use knowledge about ``de-re'' modality is more primitive than the ability to use ``de-dicto'' modalities, in which modal operators are applied to sentences. The paper explores these ideas, links them to notions of ``causation'' and ``machine'', and suggests that they are applicable to virtual or abstract machines as well as physical machines. Some conclusions are drawn regarding the nature of mind and consciousness.
A. Sloman

http://cogprints.org/id/eprint/704 (1998-06-22)
Beyond Turing Equivalence
What is the relation between intelligence and computation? Although the difficulty of defining `intelligence' is widely recognized, many are unaware that it is hard to give a satisfactory definition of `computational' if computation is supposed to provide a non-circular explanation for intelligent abilities. The only well-defined notion of `computation' is what can be generated by a Turing machine or a formally equivalent mechanism. This is not adequate for the key role in explaining the nature of mental processes, because it is too general, as many computations involve nothing mental, nor even processes: they are simply abstract structures. We need to combine the notion of `computation' with that of `machine'. This may still be too restrictive, if some non-computational mechanisms prove to be useful for intelligence. We need a theory-based taxonomy of {\em architectures} and {\em mechanisms} and corresponding process types. Computational machines may turn out to be a sub-class of the machines available for implementing intelligent agents. The more general analysis starts with the notion of a system with independently variable, causally interacting sub-states that have different causal roles, including both `belief-like' and `desire-like' sub-states, and many others. There are many significantly different such architectures. For certain architectures (including simple computers), some sub-states have a semantic interpretation for the system. The relevant concept of semantics is defined partly in terms of a kind of Tarski-like structural correspondence (not to be confused with isomorphism). This always leaves some semantic indeterminacy, which can be reduced by causal loops involving the environment. But the causal links are complex, can share causal pathways, and always leave mental states to some extent semantically indeterminate.
Aaron Sloman

http://cogprints.org/id/eprint/374 (1998-12-16)
Can colour be reduced to anything?
C. L. Hardin has argued that the colour opponency of the vision system leads to chromatic subjectivism: chromatic sensory states reduce to neurophysiological states. Much of the force of Hardin's argument derives from a critique of chromatic objectivism. On this view chromatic sensory states are held to reduce to an external property. While I agree with Hardin's critique of objectivism, it is far from clear that the problems which beset objectivism do not apply to the subjectivist position as well. I develop a critique of subjectivism that parallels Hardin's anti-objectivist argument.
Don Dedrick

http://cogprints.org/id/eprint/672 (1998-06-06)
The Concept of Disease: Structure and Change
By contrasting Hippocratic and nineteenth century theories of disease, this paper describes important conceptual changes that have taken place in the history of medicine. Disease concepts are presented as causal networks that represent the relations among the symptoms, causes, and treatment of a disease. The transition to the germ theory of disease produced dramatic conceptual changes as the result of a radically new view of disease causation. An analogy between disease and fermentation was important for two of the main developers of the germ theory of disease, Pasteur and Lister. Attention to the development of germ concepts shows the need for a referential account of conceptual change to complement a representational account.
P. Thagard

http://cogprints.org/id/eprint/7374 (2017-02-18)
Disordered views of aggressive children: A late 20th century perspective.
Richters, J. E. (1996). Disordered views of antisocial children: A late 20th century perspective. In C. F. Ferris & T. Grisso (Eds.), Understanding aggressive behavior in children. Annals of the New York Academy of Sciences (pp. 208-223). New York: New York Academy of Sciences.
John E. Richters (john.richters@gmail.com)

http://cogprints.org/id/eprint/1341 (2001-03-04)
Fodor, functions, physics, and fantasyland: Is AI a Mickey Mouse discipline?
It is widely held that the methods of AI are the appropriate methods for cognitive science. Fodor, however, has argued that AI bears the same relation to psychology as Disneyland does to physics. This claim is examined in light of the widespread but paradoxical acceptance of the Turing Test--a behavioral criterion of intelligence--among advocates of cognitivism. It is argued that, given the recalcitrance of certain deep conceptual problems in psychology, and disagreements concerning psychology's basic vocabulary, it is unlikely that AI will prove to be very psychologically enlightening until after some consensus on ontological issues in psychology is achieved.
Christopher D. Green

http://cogprints.org/id/eprint/330 (1998-06-18)
Innateness and Canalization
Cognitive scientists often employ the notion of innateness without defining it. The issue is, how is innateness defined in biology? Some critics contend that innateness is not a legitimate concept in biology. In this paper I will argue that it is.
However, neither the concept of high heritability nor the concept of flat norm of reaction (two popular accounts in the biology literature) defines innateness. An adequate account is found in developmental biology. I propose that innateness is best defined in terms of C. H. Waddington's concept of canalization.
Andre Ariew

http://cogprints.org/id/eprint/249 (1998-03-22)
One Fan's Pratfalls
When Professor Orr published his hostile review of Darwin's Dangerous Idea in the biology journal, Evolution, last February, I was not pleased. His review was full of falsehoods and misconstruals, but I had no recourse; that journal, like most academic journals, does not permit authors to respond to reviews. Luckily for me, Orr has been so eager to warn the world of my errors that he has restated his attack, with embellishments, in the Boston Review, which has invited me to respond. Months have passed, the damage has been done, but at least I get to set the record straight.
Daniel C. Dennett

http://cogprints.org/id/eprint/345 (1998-07-02)
Promiscuous Realism: Reply to Wilson
This paper presents a brief response to Robert A. Wilson's critical discussion of Promiscuous Realism [1996]. I argue that although convergence on a unique conception of species cannot be ruled out, the evidence against such an outcome is stronger than Wilson allows. In addition, given the failure of biological science to come up with a unique and privileged set of biological kinds, the relevance of the various overlapping kinds of ordinary language to the metaphysics of biological kinds is greater than Wilson admits.
J. Dupre

http://cogprints.org/id/eprint/349 (1998-07-02)
Review of Sober's "Philosophy of Biology"
Elliott Sober is among the leading contemporary contributors to the philosophy of biology. He also has an exceptional ability to explain difficult ideas clearly. He is therefore very well equipped to provide an accessible yet state-of-the-art introduction to the philosophy of biology, and in most respects this optimistic prognosis is justified by the present volume. Focussing on evolutionary biology, Sober provides a general overview of evolutionary theory; a chapter on creationism that serves as a vehicle for the discussion of the evidence for evolution; and chapters on fitness, the units of selection, adaptationism, systematics, and sociobiology. Sober displays a thorough mastery of both the biological issues and the recent philosophical controversies that have surrounded them, and the presentation is always lucid and free from unnecessary technicalities. Anyone not thoroughly conversant with contemporary discussions of evolutionary theory could learn from this book.
J. Dupre

http://cogprints.org/id/eprint/348 (1998-07-02)
Review of Sober's "From a Biological Point of View: Essays in Evolutionary Philosophy"
Biological knowledge has increased exponentially in the last century or so, and it would be surprising if some of this knowledge did not have implications for philosophy. In contrast with a good deal of Elliott Sober's best known work, which aims to bring philosophical methods to bear on issues within biology, the theme of this collection of essays is to explore some ways in which biological ideas, or more specifically evolutionary ideas, may be brought to bear on philosophical issues. Sober notes in his Introduction that there is no systematic theme beyond the attempt to explore the philosophical implications of taking evolutionary biology seriously. And certainly this is a diverse collection, especially with regard to the extent to which the various arguments of these papers depend on any particular content to evolutionary claims.
J. Dupre

http://cogprints.org/id/eprint/324 (1998-06-16)
The third contender: A critical examination of the dynamicist theory of cognition
In a recent series of publications, dynamicist researchers have proposed a new conception of cognitive functioning. This conception is intended to replace the currently dominant theories of connectionism and symbolicism. The dynamicist approach to cognitive modeling employs concepts developed in the mathematical field of dynamical systems theory. They claim that cognitive models should be embedded, low-dimensional, complex, described by coupled differential equations, and non-representational. In this paper I begin with a short description of the dynamicist project and its role as a cognitive theory. Subsequently, I determine the theoretical commitments of dynamicists, critically examine those commitments and discuss current examples of dynamicist models. In conclusion, I determine dynamicism's relation to symbolicism and connectionism and find that the dynamicist goal to establish a new paradigm has yet to be realized.
Chris Eliasmith

http://cogprints.org/id/eprint/703 (1998-06-22)
Towards a Design-Based Analysis of Emotional Episodes
The design-based approach is a methodology for investigating mechanisms capable of generating mental phenomena, whether introspectively or externally observed, and whether they occur in humans, other animals or robots.
The study of designs satisfying requirements for autonomous agency can provide new deep theoretical insights at the information processing level of description of mental mechanisms. Designs for working systems (whether on paper or implemented on computers) can systematically explicate old explanatory concepts and generate new concepts that allow new and richer interpretations of human phenomena. To illustrate this, some aspects of human grief are analysed in terms of a particular information processing architecture being explored in our research group. We do not claim that this architecture is part of the causal structure of the human mind; rather, it represents an early stage in the iterative search for a deeper and more general architecture, capable of explaining more phenomena. However even the current early design provides an interpretative ground for some familiar phenomena, including characteristic features of certain emotional episodes, particularly the phenomenon of perturbance (a partial or total loss of control of attention). The paper attempts to expound and illustrate the design-based approach to cognitive science and philosophy, to demonstrate the potential effectiveness of the approach in generating interpretative possibilities, and to provide first steps towards an information processing account of `perturbant', emotional episodes.Ian WrightAaron SlomanLuc Beaudoin1998-06-22Z2011-03-11T08:54:12Zhttp://cogprints.org/id/eprint/702This item is in the repository with the URL: http://cogprints.org/id/eprint/7021998-06-22ZWhat sort of control system is able to have a personality?This paper outlines a design-based methodology for the study of mind as a part of the broad discipline of Artificial Intelligence. Within that framework some architectural requirements for human-like minds are discussed, and some preliminary suggestions made regarding mechanisms underlying motivation, emotions, and personality. 
A brief description is given of the `Nursemaid' or `Minder' scenario being used at the University of Birmingham as a framework for research on these problems. It may be possible later to combine some of these ideas with work on synthetic agents inhabiting virtual reality environments.Aaron Sloman2001-02-23Z2011-03-11T08:54:28Zhttp://cogprints.org/id/eprint/1179This item is in the repository with the URL: http://cogprints.org/id/eprint/11792001-02-23ZWhere Did the Word "Cognitive" Come From Anyway?Cognitivism is the ascendant movement in psychology these days. It reaches from cognitive psychology into social psychology, personality, psychotherapy, development, and beyond. Few psychologists know the philosophical history of the term, "cognitive," and often use it as though it were completely synonymous with "psychological" or "mental." In this paper, I trace the origins of the term "cognitive" in the ethical theories of the early 20th century, and through the logical positivistic philosophy of science of this century's middle part. In both of these settings, "cognitive" referred not primarily to the psychological but, rather, to the truth-evaluable (i.e., those propositions about which one can say that they are either true or false). I argue that, strictly speaking, cognitivism differs from traditional mentalism in being the study of only those aspects of the mental that can be subjected to truth conditional analysis (or sufficiently similar "conditions of satisfaction"). This excludes traditionally troublesome aspects of the mental such as consciousness, qualia, and (the subjective aspects of) emotion. 
Although cognitive science has since grown to include the study of some of these phenomena, it is important to recognize that one of the original aims of the cognitivist movement was to re-introduce belief and desire into psychology, while still protecting it from the kinds of criticism that behaviorists had used to bring down full-blown mentalism at the beginning of the century. Christopher D. Green1999-03-17Z2011-03-11T08:53:52Zhttp://cogprints.org/id/eprint/379This item is in the repository with the URL: http://cogprints.org/id/eprint/3791999-03-17ZBeyond Eliminative Materialism: Some unnoticed implications of Paul Churchland's Pragmatic PluralismPaul Churchland's epistemology contains a tension between two positions, which I will call pragmatic pluralism and eliminative materialism. Pragmatic pluralism became predominant as Churchland's epistemology became more neurocomputationally inspired, which saved him from the skepticism implicit in certain passages of the theory of reduction he outlined in Scientific Realism and the Plasticity of Mind. However, once he replaces eliminativism with a neurologically inspired pragmatic pluralism, Churchland 1) cannot claim that folk psychology might be a false theory, in any significant sense; 2) cannot claim that the concepts of folk psychology might be empty of extension and lack reference; 3) cannot sustain his criticism of Dennett's "intentional stance". 
4) cannot claim to be a form of scientific realism, in the sense of believing that what science describes is somehow realer than what other conceptual systems describe.Teed Rockwell1999-08-25Z2011-03-11T08:54:19Zhttp://cogprints.org/id/eprint/828This item is in the repository with the URL: http://cogprints.org/id/eprint/8281999-08-25ZIn Defense of Experimental Data in a Relativistic MilieuThe objectivity and utility of experimental data as evidential support for knowledge-claims may be found suspect when it is shown that (a) the interpretation of experimental data is inevitably complicated by social factors like experimenter effects, subject effects and demand characteristics, (b) social factors which affect experimental data are themselves sensitive to societal conventions or cultural values, (c) all observations (including experimental observations) are necessarily theory-dependent, and (d) experimental data have limited generality because they are collected in artificial settings. These critiques of experimental data are answered by showing that (i) not all empirical studies are experiments, (ii) experimental methodology is developed to exclude alternate interpretations of data (including explanations in terms of social influences), (iii) theoretical disputes and their settlement take place in the context of a particular frame of reference, and (iv) objectivity can be achieved with observations neutral to the to-be-corroborated theory despite theory-dependent observations if distinctions are made (a) between prior observation and evidential observation and (b) between a to-be-corroborated theory and the theory underlying the identity of evidential response.Siu L. 
Chow1999-04-13Z2011-03-11T08:53:52Zhttp://cogprints.org/id/eprint/380This item is in the repository with the URL: http://cogprints.org/id/eprint/3801999-04-13ZThe investigation of consciousness through phenomenology and neuroscienceThe principal problem of consciousness is how brain processes cause subjective awareness. Since this problem involves subjectivity, ordinary scientific methods, applicable only to objective phenomena, cannot be used. Instead, by parallel application of phenomenological and scientific methods, we may establish a correspondence between the subjective and the objective. This correspondence is effected by the construction of a theoretical entity, essentially an elementary unit of consciousness, the intensity of which corresponds to electrochemical activity in a synapse. Dendritic networks correspond to causal dependencies between these subjective units. Therefore, the structure of conscious experience is derived from synaptic connectivity. This parallel phenomenal/neural analysis provides a framework for the investigation of a number of problems, including sensory inversions, the unity of consciousness, and the nature of nonhuman consciousness.Bruce J. MacLennan1998-06-22Z2011-03-11T08:54:12Zhttp://cogprints.org/id/eprint/706This item is in the repository with the URL: http://cogprints.org/id/eprint/7061998-06-22ZMusings on the roles of logical and non-logical representations in intelligenceThis paper offers a short and biased overview of the history of discussion and controversy about the role of different forms of representation in intelligent agents. It repeats and extends some of the criticisms of the `logicist' approach to AI that I first made in 1971, while also defending logic for its power and generality. 
It identifies some common confusions regarding the role of visual or diagrammatic reasoning, including confusions based on the fact that different forms of representation may be used at different levels in an implementation hierarchy. This is contrasted with the way in which the use of one form of representation (e.g. pictures) can be controlled using another (e.g. logic, or programs). Finally some questions are asked about the role of metrical information in biological visual systems.Aaron Sloman1998-05-03Z2011-03-11T08:53:47Zhttp://cogprints.org/id/eprint/286This item is in the repository with the URL: http://cogprints.org/id/eprint/2861998-05-03ZOverworking the HippocampusGray mistakenly thinks I have rejected the sort of theoretical enterprise he is undertaking, because, according to him, I think that "more data" is all that is needed to resolve all the issues. Not at all. My stalking horse was the bizarre (often pathetic) claim that no amount of empirical, "third-person point-of-view" science (data plus theory) could ever reduce the residue of mystery about consciousness to zero. This "New Mysterianism" (Flanagan, 1991) is one that he should want to combat as vigorously as I have done.Daniel C. Dennett1998-07-18Z2011-03-11T08:54:13Zhttp://cogprints.org/id/eprint/719This item is in the repository with the URL: http://cogprints.org/id/eprint/7191998-07-18ZA philosophical encounter: An interactive presentation of some of the key philosophical problems in AI and AI problems in philosophyThis paper, along with the following paper by John McCarthy, introduces some of the topics to be discussed at the IJCAI95 event `A philosophical encounter: An interactive presentation of some of the key philosophical problems in AI and AI problems in philosophy.' 
Philosophy needs AI in order to make progress with many difficult questions about the nature of mind, and AI needs philosophy in order to help clarify goals, methods, and concepts and to help with several specific technical problems. Whilst philosophical attacks on AI continue to be welcomed by a significant subset of the general public, AI defenders need to learn how to avoid philosophically naive rebuttals.A. Sloman1998-03-13Z2011-03-11T08:53:46Zhttp://cogprints.org/id/eprint/246This item is in the repository with the URL: http://cogprints.org/id/eprint/2461998-03-13ZThe Relation of Consciousness to the Material WorldMany of the arguments about how to address the hard versus the easy questions of consciousness put by Chalmers (1995) are similar to ones I have developed in Velmans (1991a,b; 1993a). This includes the multiplicity of mind/body problems, the limits of functional explanation, the need for a nonreductionist approach, and the notion that consciousness may be related to neural/physical representation via a dual-aspect theory of information. But there are also differences. Unlike Chalmers I argue for the use of neutral information processing language for functional accounts rather than the term "awareness." I do not agree that functional equivalence cannot be extricated from phenomenal equivalence, and suggest a hypothetical experiment for doing so - using a cortical implant for blindsight. I argue that not all information has phenomenal accompaniments, and introduce a different form of dual-aspect theory involving "psychological complementarity." 
I also suggest that the hard problem posed by "qualia" has its origin in a misdescription of everyday experience implicit in dualism.Max Velmans1999-09-02Z2011-03-11T08:53:52Zhttp://cogprints.org/id/eprint/392This item is in the repository with the URL: http://cogprints.org/id/eprint/3921999-09-02ZReview of Jaegwon Kim, Supervenience and Mind"Adaptation properties," as individuated according to evolutionary biology, cannot be reduced to physical properties of the token items that have the adaptation properties. This causes serious if not fatal trouble for several of Kim's crucial theses: the Causal Individuation of Kinds, Weak Supervenience, Alexander's Dictum, the synchronicity thesis (that all psychological kinds supervene on the contemporaneous physical states of the organism), the Correlation Thesis, and indeed his Restricted Correlation Thesis. All these theses are strongly individualist, in the sense of entailing that all a thing's properties are determined by its own physical properties and relations, contrary to many properties in biology and psychology.John F. Post1998-07-02Z2011-03-11T08:53:49Zhttp://cogprints.org/id/eprint/343This item is in the repository with the URL: http://cogprints.org/id/eprint/3431998-07-02ZReview of Kitcher: "The Advancement of Science: Science without Legend, Objectivity without Illusions"Philip Kitcher's book begins with a familiar historical overview. In the 1940s and 50s a confident, optimistic vision of science was widely shared by philosophers and historians of science. The goal of science was to discover the truth about nature, and over the centuries science had advanced steadily towards that goal; science discerned the real kinds of things of which the world was composed and the causal relations between them; the methods of science were rational and its deliverances objective; and so on. 
Only where science failed in some of these respects was there any need to provide external, that is social, political, or individual, explanations of scientific belief. In the late 50s, and especially subsequent to Kuhn's classic, The Structure of Scientific Revolutions, all this started to change. Historians increasingly insisted that the development of science must be treated just like any other cultural process, which meant embedding the narrative of the growth of scientific knowledge fully in the social and political context in which it occurred. This suggested that the view of science as deriving from a uniquely rational process could no longer be sustained. And, notoriously, Kuhn argued that science could not be seen as cumulative across the most dramatic changes in scientific theory. Though philosophers have never given up entirely on the old optimistic picture ("Legend" as Kitcher refers to it), its influence has steadily waned. For various reasons philosophers have become increasingly concerned over whether one could believe scientific claims to be literally true. And influenced by Duhem's thesis, revived by Quine, that scientific theory must always be underdetermined by empirical evidence, they became more sympathetic to the possibility that scientific belief must be explained, at least in substantial part, by much more than the rational objective processes envisioned by Legend. These philosophical doubts existed in uneasy tension with more extreme tendencies toward thoroughgoing relativism or skepticism, and with the movement in the sociology of science to see the whole concern with truth and falsity as an irrelevant diversion.J. Dupre1998-07-02Z2011-03-11T08:53:50Zhttp://cogprints.org/id/eprint/344This item is in the repository with the URL: http://cogprints.org/id/eprint/3441998-07-02ZReview of Rosenberg's "Instrumental Biology or the Disunity of Science"This book is the apologia of a frustrated reductionist. 
The frustration derives from Rosenberg's clear perception that the project of physicalist reduction, the reduction of all the sciences of complex objects to physics, is impossible, at least, as he often says, for beings hampered by our limited cognitive and computational abilities. The reductionism that survives this realisation is purely metaphysical. It is the firm commitment to the view that ultimately whatever happens happens because of the universally lawlike behavior of the physical particles of which everything is composed. What holds these theses together is supervenience. The physical correlate of a higher level property or kind is typically massively disjunctive. Thus although the intrinsic properties of a complex thing are fully determined by the properties of the physical particles of which it is composed, the physical property necessary and sufficient to determine such a higher level property is too complex and disjunctive for our feeble minds to grasp. The underlying physical heterogeneity of the properties or kinds we distinguish at higher structural levels is such as to make it vanishingly unlikely that these will enter into the kinds of universal laws characteristic of physics or chemistry.J. Dupre1998-07-02Z2011-03-11T08:53:49Zhttp://cogprints.org/id/eprint/341This item is in the repository with the URL: http://cogprints.org/id/eprint/3411998-07-02ZThe Solution to the Problem of the Freedom of the WillIt has notoriously been supposed that the doctrine of determinism conflicts with the belief in human freedom. Yet it is not readily apparent how indeterminism, the denial of determinism, makes human freedom any less problematic. It has sometimes been suggested that the arrival of quantum mechanics should immediately have solved the problem of free will and determinism. 
It was proposed, perhaps more often by scientists than by philosophers, that the brain would need only to be fitted with a device for amplifying indeterministic quantum phenomena for the bogey of determinism to be defeated. Acts of free will could then be those that were initiated by such indeterministic nudges. Recently there has been some inclination to revive such a story as part of the fallout from the trend for chaos theory. Chaotic systems in the brain, being indefinitely sensitive to the precise details of initial conditions, seem to provide fine candidates for the hypothetical amplifiers of quantum events.John Dupré1998-08-07Z2011-03-11T08:53:51Zhttp://cogprints.org/id/eprint/357This item is in the repository with the URL: http://cogprints.org/id/eprint/3571998-08-07ZWhat is Complexity? - The philosophy of complexity per se with application to some examples in evolutionIt is argued that complexity has only a limited use as a paradigm against reductionist approaches and that it has a much richer potential as a comparable property. What can complexity be usefully said to be a property of is discussed. It is argued that it is unlikely to have any useful value as applied to real objects or systems. Further that even relativising it to an observer has problems. It is proposed that complexity can be only usefully applied to constructions within a given language. It is argued that complexity is usefully differentiated from the concepts of size, ignorance, variety, minimum description length and order. A definition of complexity is proposed which can be summarised as follows: That property of a language expression which makes it difficult to formulate its overall behaviour even when given almost complete information about its atomic components and their inter-relations. Some of the consequences of this definition are discussed. 
It is shown that this definition encompasses several existing varieties of complexity measures and is then applied to some examples pertaining to the evolution of 'complex' systems including: "What is the complexity that has evolved in organisms and has it increased?".Bruce Edmonds1998-08-07Z2011-03-11T08:54:01Zhttp://cogprints.org/id/eprint/510This item is in the repository with the URL: http://cogprints.org/id/eprint/5101998-08-07ZModelling Learning as ModellingEconomists tend to represent learning as a procedure for estimating the parameters of the "correct" econometric model. We extend this approach by assuming that agents specify as well as estimate models. Learning thus takes the form of a dynamic process of developing models using an internal language of representation where expectations are formed by forecasting with the best current model. This introduces a distinction between the form and content of the internal models which is particularly relevant for boundedly rational agents. We propose a framework for such model development which uses a combination of measures: the error with respect to past data, the complexity of the model, the cost of finding the model and a measure of the model's specificity. The agent has to make various trade-offs between them. A utility learning agent is given as an example.Scott MossBruce Edmonds2005-05-02Z2011-03-11T08:56:03Zhttp://cogprints.org/id/eprint/4340This item is in the repository with the URL: http://cogprints.org/id/eprint/43402005-05-02ZThe Role of Inversion in the Genesis, Development and the Structure of Scientific KnowledgeThe main thrust of the argument of this thesis is to show the possibility of articulating a method of construction or of synthesis--as against the most common method of analysis or division--which has always been (so we shall argue) a necessary component of scientific theorization. 
This method will be shown to be based on a fundamental synthetic logical relation of thought, that we shall call inversion--to be understood as a species of logical opposition, and as one of the basic monadic logical operators. Thus the major objective of this thesis is to articulate this logic of inversion. This thesis can be viewed as a response to Larry Laudan's challenge, which is based on the claim that ``the case has yet to be made that the rules governing the techniques whereby theories are invented (if any such rules there be) are the sorts of things that philosophers should claim any interest in or competence at.'' The challenge itself would be to show that the logic of discovery (if at all formulatable) performs the epistemological role of the justification of scientific theories. We propose to meet this challenge head on: a) by suggesting precisely how such a logic would be formulated; b) by demonstrating its epistemological relevance (in the context of justification) and c) by showing that a) and b) can be carried out without sacrificing the fallibilist view of scientific knowledge. OBJECTIVES: We have set three successive objectives: one general, one specific, and one sub-specific, each one related to the other in that very order.
(A) The general objective is to indicate the clear possibility of renovating the traditional analytico-synthetic epistemology. By realizing this objective, we attempt to widen the scope of scientific reason or rationality, which for some time now has perniciously been dominated by pure analytic reason alone. In order to achieve this end we need to show specifically that there exists the possibility of articulating a synthetic (constructive) logic/reason, which has been considered by most mainstream thinkers either as not articulatable, or simply non-existent.
(B) The second (specific) task is to respond to the challenge of Larry Laudan by demonstrating the possibility of an epistemologically significant generativism. In this context we will argue that this generativism, which is our suggested alternative, and the simplified structuralist and semantic view of scientific theories, mutually reinforce each other to form a single coherent foundation for the renovated analytico-synthetic methodological framework.
(C) The third (sub-specific) objective, accordingly, is to show the possibility of articulating a synthetic logic that could guide us in understanding the process of theorization. This is realized by proposing the foundations for developing a logic of inversion, which represents the pattern of synthetic reason in the process of constructing scientific definitions.
Nagarjuna G.1998-07-02Z2011-03-11T08:53:49Zhttp://cogprints.org/id/eprint/342This item is in the repository with the URL: http://cogprints.org/id/eprint/3421998-07-02ZAgainst Scientific ImperialismMost discussion of the unity of science has concerned what might be called vertical relations between theories: the reducibility of biology to chemistry, or chemistry to physics, and so on. In this paper I shall be concerned rather with horizontal relations, that is to say, with theories of different kinds that deal with objects at the same structural level. Whereas the former, vertical, conception of unity through reduction has come under a good deal of criticism recently (see, e.g., Dupré 1993), horizontal unity has generally been conceded to be an important goal. The most pressing questions about horizontal unification arise in the study of human behavior. Numerous sciences including psychology, economics, anthropology, sociology, and parts of biology, attempt to provide explanations of human behavior. It is possible that some of these sciences may be able to coexist peacefully or even cooperatively. However things do not always go so smoothly, and at least two approaches to human behavior, those deriving from economics and evolutionary biology, often involve clearly imperialist tendencies. Devotees of these approaches are inclined to claim that they are in possession not just of one useful perspective on human behavior, but of the key that will open doors to the understanding of ever wider areas of human behavior. In this paper I shall consider some areas in which economic and evolutionary imperialists are currently staking claims. It is of particular interest to look at situations where the two imperialist programs are staking the same claim, but limitations of space force me to focus here mainly on economics, and my remarks on evolutionary imperialism will be cursory. 
As well as some specific insights into the particular strategies of these scientific programs, I hope that my discussion will throw some more general light on the limits of such general theoretical strategies and, thereby, I shall suggest some motivations for adhering to a horizontal pluralism of science that matches the vertical pluralism advocated by anti-reductionists.J. Dupre2002-08-12Z2011-03-11T08:53:51Zhttp://cogprints.org/id/eprint/370This item is in the repository with the URL: http://cogprints.org/id/eprint/3702002-08-12ZCOGNITION FOR SCIENCE? Book Review of Giere on Scientific CognitionIn this review of Giere's Cognitive Models of Science (1992), underlying theoretical assumptions of cognitive models are examined from a psychological and philosophical viewpoint. In particular, the aim of the book to constitute a unified cognitive model for the sciences is addressed. The ambiguity of cognitive processes is discussed as a major problem for cognitive explanations of science theory from a Kantian point of view.W.H. Dittrich2003-05-19Z2011-03-11T08:55:17Zhttp://cogprints.org/id/eprint/2960This item is in the repository with the URL: http://cogprints.org/id/eprint/29602003-05-19ZCognitive Science and PsychologyThe protocol algorithm abstracted from a human cognizer's own narrative in the course of doing a cognitive task is an explanation of the corresponding mental activity in Pylyshyn's (1984) virtual machine model of mind. Strong equivalence between an analytic algorithm and the protocol algorithm is an index of validity of the explanatory model. Cognitive psychologists may not find the index of strong equivalence useful as a means to ensure that a theory is not circular because (a) research data are also used as foundation data, (b) there is no justification for the relationship between a to-be-validated theory and its criterion of validity, and (c) foundation data, validation criterion and to-be-validated theory are not independent in cognitive science. 
There is also the difficulty with not knowing what psychological primitives are.Dr Siu L. Chow1998-04-15Z2011-03-11T08:53:47Zhttp://cogprints.org/id/eprint/281This item is in the repository with the URL: http://cogprints.org/id/eprint/2811998-04-15ZE Pluribus Unum?W&S correctly ask if groups can be like individuals in the harmony and cooperation of their parts, but in their answer, they ignore the importance of the difference between genetically related and unrelated components, and also misconstrue the import of the Hutterites.Daniel Dennett1999-08-11Z2011-03-11T08:54:18Zhttp://cogprints.org/id/eprint/825This item is in the repository with the URL: http://cogprints.org/id/eprint/8251999-08-11ZThe Experimenter's Expectancy Effect: A Meta-experimentThe claim that the outcome of an experiment may be determined by what its experimenter expects to obtain was empirically assessed with a meta-experiment. Three groups of experimenters were asked to conduct Rosenthal & Fode's (1963a) photo-rating task under two conditions which jointly satisfied the formal requirement of an experiment. The three groups of experimenters were given different information about the expected outcome. There was no evidence of the experimenter's expectancy effect when it was properly defined in terms of the difference between the two conditions. Some issues raised by Rosenthal & Fode's (1963a) study, in its capacity as evidence for the experimenter's expectancy effect in particular, are examined. Also discussed are a few metatheoretical issues, as well as some pedagogical implications, of accepting the experimenter's expectancy effect uncritically.Siu L. Chow1998-06-06Z2011-03-11T08:54:11Zhttp://cogprints.org/id/eprint/673This item is in the repository with the URL: http://cogprints.org/id/eprint/6731998-06-06ZMind, Society, and the Growth of KnowledgeExplanations of the growth of scientific knowledge can be characterized in terms of logical, cognitive, and social schemas. 
But cognitive and social schemas are complementary rather than competitive, and purely social explanations of scientific change are as inadequate as purely cognitive explanations. For example, cognitive explanations of the chemical revolution must be supplemented by and combined with social explanations, and social explanations of the rise of the mechanical world view must be supplemented by and combined with cognitive explanations. Rational appraisal of cognitive and social strategies for improving knowledge should appreciate the interdependence of mind and society.P. Thagard1998-03-21Z2011-03-11T08:54:07Zhttp://cogprints.org/id/eprint/616This item is in the repository with the URL: http://cogprints.org/id/eprint/6161998-03-21ZPostscript, 1994One puzzling feature of the response to "Evolution, Error, and Intentionality" has contributed to the direction of my current research on evolution. I was initially dumbfounded by the willingness of philosophers simply to dismiss or ignore--as too radical to be taken seriously, apparently--my suggestion that we are survival machines for our genes, as Dawkins has put it. This surprised me, for in point of fact the biology on which I based my philosophical extrapolations is not even controversial. It is uncontested that human bodies, like the bodies of all other creatures, are products of a design process that tracks, in the first instance, the "interests" of the genes whose phenotypic expressions those bodies are. There are substantive controversies about the importance of this fact, but not the fact itself.Daniel C Dennett1998-07-18Z2011-03-11T08:54:13Zhttp://cogprints.org/id/eprint/721This item is in the repository with the URL: http://cogprints.org/id/eprint/7211998-07-18ZSemantics in an intelligent control systemMuch research on intelligent systems has concentrated on low level mechanisms or sub-systems of restricted functionality. 
We need to understand how to put all the pieces together in an architecture for a complete agent with its own mind, driven by its own desires. A mind is a self-modifying control system, with a hierarchy of levels of control, and a different hierarchy of levels of implementation. AI needs to explore alternative control architectures and their implications for human, animal, and artificial minds. Only within the framework of a theory of actual and possible architectures can we solve old problems about the concept of mind and causal roles of desires, beliefs, intentions, etc. The high level "virtual machine" architecture is more useful for this than detailed mechanisms. E.g. the difference between connectionist and symbolic implementations is of relatively minor importance. A good theory provides both explanations and a framework for systematically generating concepts of possible states and processes. Lacking this, philosophers cannot provide good analyses of concepts, psychologists and biologists cannot specify what they are trying to explain or explain it, and psychotherapists and educationalists are left groping with ill-understood problems. The paper sketches some requirements for such architectures, and analyses an idea shared between engineers and philosophers: the concept of "semantic information".A. Sloman1998-04-15Z2011-03-11T08:53:47Zhttp://cogprints.org/id/eprint/278This item is in the repository with the URL: http://cogprints.org/id/eprint/2781998-04-15ZTiptoeing Past the Covered Wagons: A Response to CarrDavid Carr complains, in "Dennett Explained, or The Wheel Reinvents Dennett," (Report #26), that I have ignored deconstructionism and Phenomenology. This charge is in some regards correct and in others not. Briefly, here is how my own encounters with these fields have looked to me.Daniel C. 
Dennett1998-06-18Z2011-03-11T08:53:49Zhttp://cogprints.org/id/eprint/327This item is in the repository with the URL: http://cogprints.org/id/eprint/3271998-06-18ZConfirmation and the Computational Paradigm (or: Why Do You Think They Call It Artificial Intelligence?)The idea that human cognitive capacities are explainable by computational models is often conjoined with the idea that, while the states postulated by such models are in fact realized by brain states, there are no type-type correlations between the states postulated by computational models and brain states (a corollary of token physicalism). I argue that these ideas are not jointly tenable. I discuss the kinds of empirical evidence available to cognitive scientists for (dis)confirming computational models of cognition and argue that none of these kinds of evidence can be relevant to a choice among competing models unless there are in fact type-type correlations between the states postulated by computational models and brain states. Thus, I conclude, research into the computational procedures employed in human cognition must be conducted hand-in-hand with research into the brain processes which realize those procedures.David J. Buller1998-07-02Z2011-03-11T08:53:50Zhttp://cogprints.org/id/eprint/347This item is in the repository with the URL: http://cogprints.org/id/eprint/3471998-07-02ZCould there be a science of Economics?Much scientific thinking and thinking about science involves assumptions that there is a deep and pervasive order to the world that it is the business of science to disclose. 
A paradigmatic statement of such a view can be found in a widely discussed paper by a prominent economist, Milton Friedman (a paper which will be discussed in more detail shortly): A fundamental hypothesis of science is that appearances are deceptive and that there is a way of looking at or interpreting or organizing the evidence that will reveal superficially disconnected and diverse phenomena to be manifestations of a more fundamental and relatively simple structure. (1953/1984, p.231) On the other hand, the person sometimes described as the father of modern science, Francis Bacon, wrote: The human understanding is of its own nature prone to suppose the existence of more order and regularity in the world than it finds. And though there be many things in nature which are singular and unmatched, yet it devises for them conjugates and relatives which do not exist. (1620/1960, p. 50).J. Dupre1998-04-14Z2011-03-11T08:53:47Zhttp://cogprints.org/id/eprint/275This item is in the repository with the URL: http://cogprints.org/id/eprint/2751998-04-14ZEvolution, Teleology, IntentionalityNo response that was not as long and intricate as the two commentaries combined could do justice to their details, so what follows will satisfy nobody, myself included. I will concentrate on one issue discussed by both commentators: the relationship between evolution and teleological (or intentional) explanation. My response, in its brevity, may have just one virtue: it will confirm some of the hunches (or should I say suspicions) that these and other writers have entertained about my views. 
For more closely argued defenses of my points, see Dennett 1990a,b,c; 1991a,b.Daniel Dennett1998-04-14Z2011-03-11T08:53:47Zhttp://cogprints.org/id/eprint/274This item is in the repository with the URL: http://cogprints.org/id/eprint/2741998-04-14ZMultiple Drafts: An eternal golden braid?We have learned that the issues we raised are very difficult to think about clearly, and what "works" for one thinker falls flat for another, and leads yet another astray. So it is particularly useful to get these re-expressions of points we have tried to make. Both commentaries help by proposing further details for the Multiple Drafts Model, and asking good questions. They either directly clarify, or force us to clarify, our own account. They also both demonstrate how hard it is for even sympathetic commentators always to avoid the very habits of thought the Multiple Drafts Model was designed to combat. While acknowledging and expanding on their positive contributions, we must sound a few relatively minor alarms.Daniel C. DennettMarcel Kinsbourne1998-03-13Z2011-03-11T08:53:46Zhttp://cogprints.org/id/eprint/243This item is in the repository with the URL: http://cogprints.org/id/eprint/2431998-03-13ZA Reflexive Science of ConsciousnessClassical ways of viewing the relation of consciousness to the brain and physical world make it difficult to see how consciousness can be a subject of scientific study. In contrast to physical events, it seems to be private, subjective, and viewable only from a subject's first-person perspective. But much of psychology does investigate human experience, which suggests that classical ways of viewing these relations must be wrong. An alternative, Reflexive model is outlined along with its consequences for methodology. Within this model the external phenomenal world is viewed as part-of consciousness, rather than apart-from it. Observed events are only "public" in the sense of "private experience shared." 
Scientific observations are only "objective" in the sense of "intersubjective." Observed phenomena are only "repeatable" in the sense that they are sufficiently similar to be taken for "tokens" of the same event "type." This closes the gap between physical and psychological phenomena. Indeed, events out-there in the world can often be regarded as either physical or psychological depending on the network of relationships under consideration. Max Velmans1998-06-07Z2011-03-11T08:54:11Zhttp://cogprints.org/id/eprint/676This item is in the repository with the URL: http://cogprints.org/id/eprint/6761998-06-07ZSocieties of minds: Science as Distributed ComputingScience is studied in very different ways by historians, philosophers, psychologists, and sociologists. Not only do researchers from different fields apply markedly different methods, they also tend to focus on apparently disparate aspects of science. At the farthest extremes, we find on one side some philosophers attempting logical analyses of scientific knowledge, and on the other some sociologists maintaining that all knowledge is socially constructed. This paper is an attempt to view history, philosophy, psychology, and sociology of science from a unified perspective.P. Thagard1999-09-05Z2011-03-11T08:53:53Zhttp://cogprints.org/id/eprint/393This item is in the repository with the URL: http://cogprints.org/id/eprint/3931999-09-05ZAcceptance of a Theory: Justification or Rhetoric?The rhetoric-analytic critique of experimental psychology owes its apparent attractiveness to (a) some erroneous ideas about cognitive psychology and the rationale of experimentation, (b) the failure to distinguish between prior data and evidential data vis-à-vis the to-be-corroborated explanatory theory, and (c) evidential data owes their identity to a theory that is independent of the theory being tested. Theories in cognitive psychology are accepted because they can withstand concerted efforts to falsify them.Siu L. 
Chow1998-09-06Z2011-03-11T08:54:15Zhttp://cogprints.org/id/eprint/738This item is in the repository with the URL: http://cogprints.org/id/eprint/7381998-09-06ZBook review of _The Egalitarians -- Human and Chimpanzee_ by Margaret PowerThis book combines some very interesting ideas with stunningly poor scholarship to create a potentially misleading book. Because the basic thesis -- that episodic extreme aggression seen among chimpanzees at Gombe and Mahale has been artificially induced by provisioning -- has been widely considered and parallels other criticisms of nonhuman primate data (e.g. debates over the 'naturalness' of langur infanticide), there is a risk people unfamiliar with the chimpanzee data will accept her conclusions uncritically. At the same time, her attempt to integrate developmental psychology with socioecology in humans and apes is interesting and it'd be a shame to dismiss that approach simply because of the poor application. Secondarily, the book should be of interest to historians of science because it maps so clearly onto the tradition of contrasting Rousseauian and Hobbesian views of (human) nature.Jim Moore1998-04-14Z2011-03-11T08:53:47Zhttp://cogprints.org/id/eprint/269This item is in the repository with the URL: http://cogprints.org/id/eprint/2691998-04-14ZHitting the Nail on the HeadThis is a valuable antidote to several different ill-examined preconceptions, but I don't think it has quite succeeded in unmasking and neutralizing the bogey that motivates them all. I shall attempt to do this by reinforcing, with minor caveats, some of the authors' main points. In defense of their 'enactive' account, the authors occasionally protest too much. For instance, the trouble with (external) objectivism is not that it makes the mistake of holding the external environment constant, setting a problem for the organism. 
Following Levins and Lewontin, they insist on the role of the organism in creating its visual environment, but this is a process that occurs almost entirely on an evolutionary time scale. It is true, as Lewontin has often pointed out, that the chemical composition of the atmosphere, for instance, is as much a product of the activity of living organisms as a precondition of their life, but it is also true that it can be safely treated as a constant, since its changes in response to local organismic activity are usually insignificant as variables in interaction with the variables under scrutiny. The same is true of the colors of objects: they have indeed co-evolved with the color-vision systems of the organisms, but, except on an evolutionary time scale, they are in the main imperturbable by organisms' perceptual activity.Daniel Dennett1998-04-14Z2011-03-11T08:53:47Zhttp://cogprints.org/id/eprint/265This item is in the repository with the URL: http://cogprints.org/id/eprint/2651998-04-14Z"Temporal anomalies of consciousness: implications of the uncentered brain"As cognitive science, including especially cognitive neuroscience, closes in on the first realistic models of the human mind, philosophical puzzles and problems that have been conveniently postponed or ignored for generations are beginning to haunt the efforts of the scientists, confounding their vision and leading them down hopeless paths of theory. I will illustrate this claim with a brief look at several temporal phenomena which appear anomalous only because of a cognitive illusion: an illusion about the point of view of the observer. 
Since there is no point in the brain where "it all comes together," several compelling oversimplifications of traditional theorizing must be abandoned.Daniel C Dennett1998-04-05Z2011-03-11T08:53:46Zhttp://cogprints.org/id/eprint/262This item is in the repository with the URL: http://cogprints.org/id/eprint/2621998-04-05ZThe Brain and its BoundariesThese are heady times for the sciences of the mind. The pace of discovery is quickening, thanks to the mountain of data provided by the new brain-imaging technologies, but thanks even more to the computer simulations that have expanded and disciplined our imaginations, dramatically enlarging the logical space of models that can be investigated. We can now seriously consider hypotheses that a few years ago were simply unframable--"inconceivable", a philosopher might have been tempted to say. These computer-expanded powers are being vigorously exploited by a new generation of theorists and experimentalists. In some quarters the first symptoms of gold rush fever have been detected.Daniel C Dennett1998-04-05Z2011-03-11T08:53:46Zhttp://cogprints.org/id/eprint/260This item is in the repository with the URL: http://cogprints.org/id/eprint/2601998-04-05ZGranny's Campaign for Safe ScienceWhat is the thread tying together all of Jerry Fodor's vigorous and influential campaigns over the years? Consider the diversity of his bêtes noires. 
In Chihara and Fodor, 1965, it was Wittgenstein and the "no private language" gang; in Psychological Explanation (1968) and The Language of Thought (1975), it was Ryle, Skinner and other behaviorists; in "Tom Swift and his Procedural Grandmother" (1978, reprinted in 1981) it was AI in general and procedural semantics in particular; in "Three Cheers for Propositional Attitudes" (1979, reprinted with revisions in 1981) it was me and my "irrealist" way with stances; in "Methodological Solipsism Considered as a Research Strategy in Cognitive Science" (1980, reprinted in 1981) it was the brand of "naturalism" that claimed that psychology had to traffic in meanings that were not inside the head; in The Modularity of Mind (1983) it was Bruner and the other New Look psychologists who infected perception with thought, but also, in the shocking punch line of the last chapter, AI again; in Psychosemantics (1987) it was the meaning holists and those who would ground their naturalistic appeal to teleological formulations in what Fodor elsewhere has called "vulgar Darwinism" (these villains take another drubbing in his forthcoming "A Theory of Content"); and in "Connectionism and Cognitive Architecture: a Critical Analysis", Fodor and Pylyshyn, 1988, it is the connectionists and their many friends.Daniel C Dennett2001-06-18Z2011-03-11T08:54:41Zhttp://cogprints.org/id/eprint/1580This item is in the repository with the URL: http://cogprints.org/id/eprint/15802001-06-18ZPost-Gutenberg Galaxy: The Fourth Revolution in the Means of Production of KnowledgeThe 4th revolution after speech, writing and print, is skywriting (email, hypermail, web-based archiving).Stevan Harnad1998-04-05Z2011-03-11T08:53:46Zhttp://cogprints.org/id/eprint/261This item is in the repository with the URL: http://cogprints.org/id/eprint/2611998-04-05ZTwo Contrasts: Folk Craft versus Folk Science and Belief versus OpinionLet us begin with what all of us here agree on: folk psychology is not immune to revision. 
It has a certain vulnerability in principle. Any particular part of it might be overthrown and replaced by some other doctrine. Yet we disagree about how likely it is that that vulnerability in principle will turn into the actual demise of large portions--or all--of folk psychology. I am of the view that folk psychology is here for the long haul, and for some very good reasons. But I am not going to concentrate on that in my remarks. What nobody has bothered saying here yet, but is probably worth saying, is that for all of its blemishes, warts and perplexities, folk psychology is an extraordinarily powerful source of prediction. It is not just prodigiously powerful but remarkably easy for human beings to use. We are virtuoso exploiters of not so much a theory as a craft. That is, we might better call it a folk craft rather than a folk theory. The theory of folk psychology is the ideology about the craft, and there is lots of room, as anthropologists will remind us, for false ideology.Daniel C Dennett1998-12-14Z2011-03-11T08:53:51Zhttp://cogprints.org/id/eprint/371This item is in the repository with the URL: http://cogprints.org/id/eprint/3711998-12-14ZWhat's wrong with Psychology, anyway?This chapter considers various factors that have been responsible for the comparatively slow development of psychology into a cumulative empirical science. Special attention is devoted to correctable methodological mistakes, the over-reliance upon significance testing (and the fact that, in psychology, the null hypothesis is almost always false), and an analysis of the concept of replication.David T. Lykken1998-04-05Z2011-03-11T08:53:46Zhttp://cogprints.org/id/eprint/258This item is in the repository with the URL: http://cogprints.org/id/eprint/2581998-04-05ZMemes and the Exploitation of ImaginationThe general issue to be addressed in a Mandel Lecture is how (or whether) art promotes human evolution or development. 
I shall understand the term "art" in its broadest connotations--perhaps broader than the American Society for Aesthetics would normally recognize: I shall understand art to include all artifice, all human invention. What I shall say will a fortiori include art in the narrower sense, but I don't intend to draw particular attention to the way my thesis applies to it.Daniel C Dennett2001-06-18Z2011-03-11T08:54:41Zhttp://cogprints.org/id/eprint/1581This item is in the repository with the URL: http://cogprints.org/id/eprint/15812001-06-18ZScholarly Skywriting and the Prepublication Continuum of Scientific Inquiry [reprinted in Current Contents 45: 9-13, November 11 1991]Scientific publication is a continuum, from unrefereed preprints to refereed reprints, to revisions, commentaries, and replies. All this is optimally done electronically, as "Scholarly Skywriting."Stevan Harnad2002-11-12Z2011-03-11T08:55:06Zhttp://cogprints.org/id/eprint/2597This item is in the repository with the URL: http://cogprints.org/id/eprint/25972002-11-12ZA theoretical account of translation - without a translation theoryIn this paper I argue that the phenomenon commonly referred to as "translation" can be accounted for naturally within the relevance theory of communication developed by Sperber and Wilson (1986a): there is no need for a distinct general theory of translation. Most kinds of translation can be analysed as varieties of interpretive use. I distinguish direct from indirect translation. Direct translation corresponds to the idea that translation should convey the same meaning as the original. It requires the receptors to familiarise themselves with the context envisaged for the original text. The idea that the meaning of the original can be communicated to any receptor audience, no matter how different their background, is shown to be a misconception based on mistaken assumptions about communication. Indirect translation involves looser degrees of resemblance. 
I show that direct translation is merely a special case of interpretive use, whereas indirect translation is the general case. In all cases the success of the translation depends on how well it meets the basic criterion for all human communication, which is consistency with the principle of relevance. Thus the different varieties of translation can be accounted for without recourse to typologies of texts, translations, functions or the like.Dr Ernst-August Gutt2006-08-01Z2011-03-11T08:56:32Zhttp://cogprints.org/id/eprint/5019This item is in the repository with the URL: http://cogprints.org/id/eprint/50192006-08-01ZExperience and Theory as Determinants of Attitudes toward Mental Representation: The Case of Knight Dunlap and the Vanishing Images of J.B. WatsonGalton and subsequent investigators find wide divergences in people's subjective reports of mental imagery. Such individual differences might be taken to explain the peculiarly irreconcilable disputes over the nature and cognitive significance of imagery which have periodically broken out among psychologists and philosophers. However, to so explain these disputes is itself to take a substantive and questionable position on the cognitive role of imagery. This article distinguishes three separable issues over which people can be "for" or "against" mental images. Conflation of these issues can lead to theoretical differences being mistaken for experiential differences, even by theorists themselves. This is applied to the case of John B. Watson, who inaugurated a half-century of neglect of image psychology. Watson originally claimed to have vivid imagery; by 1913 he was denying the existence of images. This strange reversal, which made his behaviorism possible, is explicable as a "creative misconstrual" of Dunlap's "motor" theory of imagination.Nigel J. T. 
Thomas2004-08-10Z2011-03-11T08:55:39Zhttp://cogprints.org/id/eprint/3750This item is in the repository with the URL: http://cogprints.org/id/eprint/37502004-08-10ZExperience and Theory as Determinants of Attitudes toward Mental Representation: The Case of Knight Dunlap and the Vanishing Images of J.B. WatsonGalton and subsequent investigators find wide divergences in people's subjective reports of mental imagery. Such individual differences might be taken to explain the peculiarly irreconcilable disputes over the nature and cognitive significance of imagery which have periodically broken out among psychologists and philosophers. However, to so explain these disputes is itself to take a substantive and questionable position on the cognitive role of imagery. This article distinguishes three separable issues over which people can be "for" or "against" mental images. Conflation of these issues can lead to theoretical differences being mistaken for experiential differences, even by theorists themselves. This is applied to the case of John B. Watson, who inaugurated a half-century of neglect of image psychology. Watson originally claimed to have vivid imagery; by 1913 he was denying the existence of images. This strange reversal, which made his behaviorism possible, is explicable as a "creative misconstrual" of Dunlap's "motor" theory of imagination.Nigel J. T. Thomas1998-04-05Z2011-03-11T08:53:46Zhttp://cogprints.org/id/eprint/257This item is in the repository with the URL: http://cogprints.org/id/eprint/2571998-04-05ZThe Origins of SelvesWhat is a self? 
Since Descartes in the 17th Century we have had a vision of the self as a sort of immaterial ghost that owns and controls a body the way you own and control your car.Daniel C Dennett1999-08-06Z2011-03-11T08:54:18Zhttp://cogprints.org/id/eprint/823This item is in the repository with the URL: http://cogprints.org/id/eprint/8231999-08-06ZSignificance Tests and Deduction: Reply to Folger (1989)Shows that agreeing with Folger's (1989) methodological observations does not mean that it is incorrect to use significance tests. This contention is based on the dynamics of theory corroboration, with reference to which the following distinctions are illustrated, namely, the distinctions between (a) statistical hypothesis testing, theory corroboration, and syllogistic argument, (b) a responsible experimenter and a cynical experimenter, (c) logical validity and methodological correctness, and (d) warranted assertability and truth.Siu L. Chow1999-08-07Z2011-03-11T08:54:18Zhttp://cogprints.org/id/eprint/824This item is in the repository with the URL: http://cogprints.org/id/eprint/8241999-08-07ZSignificance Test or Effect Size?I describe and question the argument that in psychological research, the significance test should be replaced (or, at least, supplemented) by a more informative index (viz., effect size or statistical power) in the case of theory-corroboration experimentation because it has been made on the basis of some debatable assumptions about the rationale of scientific investigation. The rationale of theory-corroboration experimentation requires nothing more than a binary decision about the relation between two variables. This binary decision supplies the minor premise for the syllogism implicated when a theory is being tested. Some metatheoretical considerations reveal that the magnitude of the effect-size estimate is not a satisfactory alternative to the significance test.Siu L. 
Chow1999-07-26Z2011-03-11T08:54:18Zhttp://cogprints.org/id/eprint/816This item is in the repository with the URL: http://cogprints.org/id/eprint/8161999-07-26ZScience, Ecological Validity and ExperimentationSome important meta-theoretical insights about experimental psychology are integrated into the "conjectures and refutations" framework in order to reinforce a realist's view of scientific methodology. Some issues which may be difficult for the realist's position are discussed. It is argued that there is no need for the evidential observation to mimic the phenomenon of interest; such a mimicry may even be counter-productive. A case is also made that questions about ecological validity are not relevant to the rationale of experimentation.Siu L. Chow1998-03-27Z2011-03-11T08:53:46Zhttp://cogprints.org/id/eprint/250This item is in the repository with the URL: http://cogprints.org/id/eprint/2501998-03-27ZEliminate the Middletoad!Philosophical controversy about the mind has flourished in the thin air of our ignorance about the brain. The humble toad, it now seems, may provide our first instance of a creature whose whole brain is within the reach of our scientific understanding. What will happen to the traditional philosophical issues as our theoretical and factual ignorance recedes? Discussion of the issues explored in the target article is, as Ewert says, "often too theoretical, sometimes philosophical and even [as if that weren't bad enough?--DCD] emotion-laden." The research reported by Ewert has interesting philosophical implications, as he probably recognizes, but he wisely leaves the philosophy to the philosophers. Being one, I would like to draw some of the conclusions he eschews.Daniel C. 
Dennett2011-12-16T00:58:34Z2011-12-16T00:58:34Zhttp://cogprints.org/id/eprint/7755This item is in the repository with the URL: http://cogprints.org/id/eprint/77552011-12-16T00:58:34ZNatural Transformations of Organismic StructuresThe mathematical structures underlying the theories of organismic sets, (M, R)-systems and molecular sets are shown to be transformed naturally within the theory of categories and functors. Their natural transformations allow the comparison of distinct entities, as well as the modelling of dynamics in “organismic” structures.Prof. Dr. I. C. Baianuibaianu@illinois.edu1998-07-24Z2011-03-11T08:53:51Zhttp://cogprints.org/id/eprint/356This item is in the repository with the URL: http://cogprints.org/id/eprint/3561998-07-24ZMinds, Machines and GoedelGoedel's theorem states that in any consistent system which is strong enough to produce simple arithmetic there are formulae which cannot be proved-in-the-system, but which we can see to be true. Essentially, we consider the formula which says, in effect, "This formula is unprovable-in-the-system". If this formula were provable-in-the-system, we should have a contradiction: for if it were provable-in-the-system, then it would not be unprovable-in-the-system, so that "This formula is unprovable-in-the-system" would be false: equally, if it were provable-in-the-system, then it would not be false, but would be true, since in any consistent system nothing false can be proved-in-the-system, but only truths. So the formula "This formula is unprovable-in-the-system" is not provable-in-the-system, but unprovable-in-the-system. Further, if the formula "This formula is unprovable-in-the-system" is unprovable-in-the-system, then it is true that that formula is unprovable-in-the-system, that is, "This formula is unprovable-in-the-system" is true. Goedel's theorem must apply to cybernetical machines, because it is of the essence of being a machine, that it should be a concrete instantiation of a formal system. 
It follows that given any machine which is consistent and capable of doing simple arithmetic, there is a formula which it is incapable of producing as being true---i.e., the formula is unprovable-in-the-system---but which we can see to be true. It follows that no machine can be a complete or adequate model of the mind, that minds are essentially different from machines.J.R. Lucas2013-09-17T14:29:34Z2013-09-17T14:29:34Zhttp://cogprints.org/id/eprint/9031This item is in the repository with the URL: http://cogprints.org/id/eprint/90312013-09-17T14:29:34ZTFA Informatica di Base Course Program, AY 2012-13
1. What is software? The nature of software is an aspect of computer science that is rarely examined in all its facets. Software in fact has at least two levels: source code (human-readable) and binary code (machine-readable). But this first level of analysis leaves open a series of important questions: how much does the implementation matter? Why do we use different programming languages if in theory they are all Turing-equivalent? What is the difference between instructions, execution, and data? We will look at the special case of metaprogramming, and at the role of the programmer as (meta-)author of the software.
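The two levels of software described above (human-readable source text, machine-oriented compiled form) and the blurring of instructions and data in metaprogramming can be sketched in a few lines; the snippet below is an illustrative Python example of my own, not material from the course.

```python
# Metaprogramming sketch (illustrative; not part of the course syllabus).
# At one level, source code is just human-readable text ...
source = "def square(x):\n    return x * x\n"

# ... which the interpreter compiles into a machine-oriented form ...
namespace = {}
exec(compile(source, "<generated>", "exec"), namespace)
square = namespace["square"]

# ... and the running program can then treat its own code as data:
print(square(7))                      # the generated function executes: 49
print(type(square.__code__.co_code))  # its compiled bytecode is plain bytes
```

Here the same artifact appears as text (`source`), as executable instructions (`square`), and as inspectable data (`square.__code__`), which is exactly the three-way distinction the first question raises.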
2. Software production models. Around software there is a whole ecosystem, made up of different figures, professional and otherwise: the designer, the developer, the client, the end user, etc. Depending on the license chosen (proprietary, open source, free software), different production models take shape, with different implications from an economic point of view as well. We will look at the Taylorist industrial production model of proprietary software, the Toyota model (from eXtreme Programming), Raymond's bazaar model, the dual-licensing strategy, Anderson's long-tail law, and other models known in the literature.
3. Analysis of ethical cases in computing. The pervasiveness of computing in society gives rise to a series of ethical dilemmas that are difficult to resolve, and that can be analyzed with the method of ethical case analysis. After the method has been explained in all its steps, some ethical cases known in the literature will be discussed, such as: who is responsible when a drone mistakenly kills a civilian in war? Is it right to put surveillance cameras everywhere, or could they be used for illicit purposes and should their use therefore be limited?Dr Federico Gobbofederico.gobbo@univaq.it