---
abstract: >-
  Neural networks that learn the What and Where task perform better if they
  possess a modular architecture for separately processing the identity and the
  spatial location of objects. In previous simulations the modular architecture
  was either hardwired or developed during an individual's life based on a
  preference for short connections, given a set of hardwired unit locations. We
  present two sets of simulations in which the network architecture is
  genetically inherited and evolves in a population of neural networks under two
  different conditions: (1) both the architecture and the connection weights
  evolve; (2) the architecture is inherited and evolves, but the connection
  weights are learned during life. The best results are obtained in condition
  (2). Condition (1) gives unsatisfactory results because (a) adapted sets of
  weights can suddenly become maladaptive if the architecture changes, (b)
  evolution fails to properly assign computational resources (hidden units) to
  the two tasks, and (c) genetic linkage between sets of weights for different
  modules can result in a favourable mutation in one set of weights being
  accompanied by an unfavourable mutation in another set.
altloc:
  - http://gral.ip.rm.cnr.it/rcalabretta/evolmod.pdf
chapter: ~
commentary: ~
commref: ~
confdates: 'September 16-18, 2000'
conference: 'Sixth Neural Computation and Psychology Workshop: Evolution, Learning, and Development'
confloc: Liège
contact_email: ~
creators_id: []
creators_name:
  - family: Di Ferdinando
    given: Andrea
    honourific: ''
    lineage: ''
  - family: Calabretta
    given: Raffaele
    honourific: ''
    lineage: ''
  - family: Parisi
    given: Domenico
    honourific: ''
    lineage: ''
date: 2001
date_type: published
datestamp: 2001-02-10
department: ~
dir: disk0/00/00/12/98
edit_lock_since: ~
edit_lock_until: ~
edit_lock_user: ~
editors_id: []
editors_name:
  - family: French
    given: Robert
    honourific: ''
    lineage: ''
  - family: Sougné
    given: Jacques
    honourific: ''
    lineage: ''
eprint_status: archive
eprintid: 1298
fileinfo: /style/images/fileicons/application_pdf.png;/1298/3/evolmod.pdf
full_text_status: public
importid: ~
institution: ~
isbn: ~
ispublished: inpress
issn: ~
item_issues_comment: []
item_issues_count: 0
item_issues_description: []
item_issues_id: []
item_issues_reported_by: []
item_issues_resolved_by: []
item_issues_status: []
item_issues_timestamp: []
item_issues_type: []
keywords: 'evolution of modularity, neural networks, genetic algorithms, what and where system'
lastmod: 2011-03-11 08:54:30
latitude: ~
longitude: ~
metadata_visibility: show
note: ~
number: ~
pagerange: ~
pubdom: FALSE
publication: ~
publisher: Springer Verlag
refereed: TRUE
referencetext: |
  1. Belew, R. K., McInerney, J., & Schraudolph, N. (1991). Evolving networks: using the genetic algorithm with connectionist learning. In C. G. Langton, C. Taylor, J. D. Farmer, & S. Rasmussen (eds), Artificial Life II. Addison-Wesley, Reading, MA.
  2. Belew, R. K. & Mitchell, M. (1996). Adaptive Individuals in Evolving Populations. Addison-Wesley, Reading, MA.
  3. Calabretta, R., Nolfi, S., Parisi, D. & Wagner, G. P. (2000). Duplication of modules facilitates the evolution of functional specialization. Artificial Life 6:69-84.
  4. Cangelosi, A., Parisi, D. & Nolfi, S. (1994). Cell division and migration in a 'genotype' for neural networks. Network 5:497-515.
  5. Elman, J. L., Bates, E. A., Johnson, M. H., Karmiloff-Smith, A., Parisi, D. & Plunkett, K. (1996). Rethinking Innateness. A Connectionist Perspective on Development. The MIT Press, Cambridge, MA.
  6. Floreano, D. & Urzelai, J. (2000). Evolutionary robots with on-line self-organization and behavioral fitness. Neural Networks 13:431-443.
  7. Jacobs, R. A., Jordan, M. I. & Barto, A. G. (1991). Task decomposition through competition in a modular connectionist architecture: The what and where vision tasks. Cognitive Science 15:219-250.
  8. Jacobs, R. A. & Jordan, M. I. (1992). Computational consequences of a bias toward short connections. Journal of Cognitive Neuroscience 4:323-335.
  9. Kolen, J. F. & Pollack, J. B. (1990). Back-propagation is sensitive to initial conditions. Complex Systems 4:269-280.
  10. Murre, J. M. J. (1992). Learning and Categorization in Modular Neural Networks. Harvester, New York, NY.
  11. Plaut, D. C. & Hinton, G. E. (1987). Learning sets of filters using back-propagation. Computer Speech and Language 2:35-61.
  12. Reed, R. D. & Marks II, R. J. (1999). Neural Smithing. Supervised Learning in Feedforward Artificial Neural Networks. The MIT Press, Cambridge, MA.
  13. Rueckl, J. G., Cave, K. R. & Kosslyn, S. M. (1989). Why are "what" and "where" processed by separate cortical visual systems? A computational investigation. Journal of Cognitive Neuroscience 1:171-186.
  14. Ungerleider, L. G. & Mishkin, M. (1982). Two cortical visual systems. In D. J. Ingle, M. A. Goodale & R. J. W. Mansfield (eds), The Analysis of Visual Behavior. The MIT Press, Cambridge, MA.
relation_type: []
relation_uri: []
reportno: ~
rev_number: 12
series: ~
source: ~
status_changed: 2007-09-12 16:37:14
subjects:
  - bio-evo
  - cog-psy
  - comp-neuro-sci
  - comp-sci-neural-nets
succeeds: ~
suggestions: ~
sword_depositor: ~
sword_slug: ~
thesistype: ~
title: Evolving modular architectures for neural networks
type: confpaper
userid: 1394
volume: ~