CSCC'99 PAPER ABSTRACT

"From Synapses To Rules: The Self-Referential Perspective"

B. Apolloni, G. Biella (Italy) and A. Stafylopatis (Greece)

Abstract: We consider the extraction of formal knowledge from a trained neural network from the perspective of identifying this network with a part of our brain, and the final user of the extracted information with our brain again. We first analyze theoretical issues, coming mainly from AI but also from neurophysiology and information theory, concerning the relations and links between subsymbolic and symbolic knowledge in our brain. From this analysis a bipartition of the considered algorithms emerges. On one side are direct methods for discovering Horn clauses and their extensions from trained networks, a usual subject of many review papers. On the other side, we identify symbolic knowledge with tools for efficiently managing concepts discovered in a subsymbolic way, within a self-referential framework where the user of the concepts is itself a neural network. At first glance, this alternative perspective would merely reconsider the direct methods with respect to the functionality of the hidden-to-output connections. But precisely because of self-referentiality, discovering formal connections requires heavy training of the involved neural network, namely training capable of simulating the architectural and parametric refinement achieved by our brain over millennia. This calls for symbolic learning algorithms that comply with neurophysiological functionality constraints but shorten the long training phase mentioned above, by exploiting facilities now available to our brain, such as preexisting formal knowledge and the capability of generating suitable examples by ourselves. By definition, the output of these algorithms is exactly the target knowledge we seek to extract from neural networks.
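To make the "direct methods" concrete, a minimal sketch in Python is given below. It is an illustrative, subset-style extraction over a single trained threshold unit, not the authors' algorithm; the toy weights, threshold, and input names (a, b, c) are hypothetical stand-ins for a trained network. Each minimal conjunction of positive inputs that forces the unit above threshold, even in the worst case for the remaining inputs, is read off as a Horn-clause-like rule.

    # Illustrative subset-style rule extraction from one trained threshold
    # unit (hypothetical weights; NOT the authors' method).
    from itertools import combinations

    weights = {"a": 0.9, "b": 0.8, "c": -0.4}   # assumed learned weights
    theta = 1.0                                  # assumed learned threshold

    def guarantees_firing(subset):
        """True if the inputs in `subset` force the unit above threshold
        even in the worst case for the remaining (possibly active) inputs."""
        worst = sum(weights[i] for i in subset) + \
                sum(min(0.0, w) for i, w in weights.items() if i not in subset)
        return worst > theta

    positive = [i for i, w in weights.items() if w > 0]
    rules = []
    for size in range(1, len(positive) + 1):
        for subset in combinations(positive, size):
            # keep antecedents minimal: skip supersets of found rules
            if any(set(r) <= set(subset) for r in rules):
                continue
            if guarantees_firing(subset):
                rules.append(subset)

    for r in rules:
        print(" AND ".join(r) + " -> out")   # prints: a AND b -> out

Under these assumed weights the sketch emits the single sound rule "a AND b -> out": the conjunction clears the threshold (0.9 + 0.8 - 0.4 > 1.0) even when the negatively weighted input c is active, while neither input suffices alone.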

Key-Words: Neural networks, rule extraction, symbolic/subsymbolic knowledge.
CSCC'99 Proc., pp. 5301-5306