60. Brain, Gene, and Quantum Inspired Computational Intelligence

Nikola Kasabov

This chapter discusses opportunities and challenges for the creation of methods of computational intelligence (CI), and more specifically artificial neural networks (ANN), inspired by principles at different levels of information processing in the brain: cognitive, neuronal, genetic, and quantum, and, mainly, the issues related to the integration of these principles into more powerful and accurate CI methods. It is demonstrated how some of these methods can be applied to model biological processes and to improve our understanding in the subject area, with the generic CI methods also being applicable to challenging generic AI problems. The chapter first offers a brief presentation of some principles of information processing at different levels of the brain, and then presents brain inspired, gene inspired, and quantum inspired CI. The main contribution of the chapter, however, is the introduction of methods inspired by the integration of principles from several levels of information processing, namely:

1. A computational neurogenetic model that combines in one model gene information related to spiking neuronal activities.
2. A general framework of a quantum spiking neural network (SNN) model.
3. A general framework of a quantum computational neurogenetic model (CNGM).

Many open questions and challenges are discussed, along with directions for further research.

60.1 Introduction .......................................... 2
60.2 CI and ANN Models Inspired by Neuronal and Cognitive Processes in the Brain ......... 3
  60.2.1 Local, Knowledge-Based Learning Evolving Connectionist Systems – Weakly Brain Inspired Models ............ 3
  60.2.2 Spiking Neural Networks – Strongly Brain Inspired Models .......... 4
  60.2.3 Open Questions ............................ 6
60.3 Gene Inspired Methods of Computational Intelligence ................. 6
  60.3.1 The Central Dogma in Molecular Biology and GRN .......... 6
  60.3.2 GRN-ANN Models .......................... 7
60.4 Computational Neurogenetic Models ........ 8
  60.4.1 General Notions ........................... 8
  60.4.2 A Computational Neurogenetic Model that Integrates GRN Within an SNN Model ............................. 8
  60.4.3 Open Questions ............................ 10
60.5 Quantum Inspired CI ............................... 11
  60.5.1 Quantum Level of Information Processing ................... 11
  60.5.2 Why Quantum Inspired CI? ............. 11
  60.5.3 Quantum Inspired Evolutionary Computation and Connectionist Models ........ 12
60.6 Towards the Integration of Brain, Gene, and Quantum Information Processing Principles: A Conceptual Framework for Future Research ............ 12
  60.6.1 Quantum Inspired SNN .................. 12
  60.6.2 A Conceptual Framework of a QI-CNGM ............................... 13
  60.6.3 Open Questions ............................ 13
60.7 Conclusions and Directions for Further Research ............................... 14
References ................................................... 14
60.1 Introduction

The brain is a dynamic information processing system that evolves its structure and functionality in time through information processing at different levels (Fig. 60.1): quantum, molecular (genetic), single neuron, ensemble of neurons, cognitive, evolutionary.

Principles from each of these levels have already been used as inspiration for CI methods, and more specifically for methods of ANN. The chapter focuses on the interaction between these levels, and mainly on how this interaction can be modeled and how it can be used in principle to improve existing CI methods and for a better understanding of brain, gene, and quantum processes.

At the quantum level, particles (atoms, ions, electrons, etc.), which make every molecule in the material world, move continuously, being in several states at the same time, and are characterized by probability, phase, frequency, and energy.

At a molecular level, RNA and protein molecules evolve in a cell and interact in a continuous way, based on the information stored in the DNA and on external factors, and affect the functioning of a cell (neuron) under certain conditions.

At the level of a neuron, the internal information processes and the external stimuli cause the neuron to produce a signal that carries information to be transferred to other neurons.

At the level of neuronal ensembles, all neurons operate in concert, defining the function of the ensemble, for instance the perception of a spoken word.

At the level of the whole brain, cognitive processes take place, such as language and reasoning, and global information processes are manifested, such as consciousness.

At the level of a population of individuals, species evolve through evolution, changing the genetic DNA code for a better adaptation.

The information processes at each level shown in Fig. 60.1 are very complex and difficult to understand, but much more difficult to understand is the interaction between the different levels. It may be that understanding the interaction through its modeling would be a key to understanding each level of information processing in the brain, and perhaps the brain as a whole. Using principles from different levels in one ANN CI model and modeling their relationship can lead to a next generation of ANN as more powerful tools to understand the brain and to solve complex problems.

Some examples of CI models that combine principles from different levels shown in Fig. 60.1 are: computational neurogenetic models [60.1–3], quantum inspired CI and ANN [60.4, 5], and evolutionary models [60.6, 7]. Suggestions have been made that modeling of higher cognitive functions, and consciousness in particular, can be achieved if principles from quantum information processing are considered [60.8, 9]. There are many issues and open questions to be addressed when creating CI methods that integrate principles from different levels; some of these are presented in this chapter.

In Sect. 60.2, models inspired by information processes in the brain, which include local learning evolving connectionist systems (ECOS) and SNN, are discussed briefly. Section 60.3 presents CI methods inspired by genetic information processes, mainly models
of gene regulatory networks (GRN). In Sect. 60.4, the issue of combining neuronal with genetic information processing is discussed and the principles of CNGM are presented. Section 60.5 presents some ideas behind quantum inspired CI. Section 60.6 presents a model of a quantum inspired SNN and offers a theoretical framework for the integration of principles from quantum, genetic, and neuronal information processing. Section 60.7 concludes the chapter with more open questions and challenges for the future.

Fig. 60.1 Levels of information processing in the brain and the interaction between the levels:
1. Quantum information processing
2. Molecular information processing (genes, proteins)
3. Information processing in a cell (neuron)
4. System information processing (e.g., neural ensemble)
5. Brain cognitive processes
6. Evolutionary (population/generation) processes

60.2 CI and ANN Models Inspired by Neuronal and Cognitive Processes in the Brain

Many CI methods, in particular ANN, are brain inspired (using some principles from the brain) or brain-like (more biologically plausible models, usually developed to model a brain function) [60.1, 10–15]. Examples are: models of single neurons and neural network ensembles [60.16–22], cognitive ANN models [60.14, 15, 23, 24], etc.

These models have been created with the goals of:

• Modeling and understanding brain functions.
• Creating powerful methods and systems of CI for solving complex problems in all areas of science and the humanities.

In this section we present only two groups of models, namely ECOS and SNN, as they will be used in other sections to create models that incorporate principles from other levels of information processing.

60.2.1 Local, Knowledge-Based Learning Evolving Connectionist Systems – Weakly Brain Inspired Models

ECOS are adaptive, incremental learning and knowledge representation systems that evolve their structure and functionality, where there is a connectionist architecture in the core of the system that consists of neurons (information processing units) and connections between them [60.25]. An ECOS is a CI system based on neural networks, but using other techniques of CI, that operates continuously in time and adapts its structure and functionality through continuous interaction with the environment and with other systems. The adaptation is defined through:

1. A set of evolving rules.
2. A set of parameters (genes) that are subject to change during the system operation.
3. An incoming continuous flow of information, possibly with unknown distribution.
4. Goal (rationale) criteria (also subject to modification) that are applied to optimize the performance of the system over time.

ECOS learning algorithms are inspired by brain-like information processing principles, e.g.:

1. They evolve in an open space, where the dimensions of the space can change.
2. They learn via incremental learning, possibly in an on-line mode.
3. They learn continuously in a lifelong learning mode.
4. They learn both as individual systems and as an evolutionary population of such systems.
5. They use constructive learning and have evolving structures.
6. They learn and partition the problem space locally, thus allowing for a fast adaptation and tracing the evolving processes over time.
7. They evolve different types of knowledge representation from data, mostly a combination of memory-based and symbolic knowledge.

Many ECOS have been suggested so far, where the structure and the functionality of the models evolve through incremental, continuous learning from incoming data, sometimes in an on-line mode, and through interaction with other models and the environment. Examples are: growing SOMs [60.17], growing gas [60.26], RAN [60.27], growing RBF networks [60.28, 29], FuzzyARTMAP [60.14], EFuNN [60.25, 30, 31], DENFIS [60.32], and many more. A block diagram of an EFuNN is given in Fig. 60.2. It is used in Sect. 60.3 to model GRN. The local, constructive learning principle (items 5 and 6 above) is illustrated in the sketch below.
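The following Python fragment is a minimal, illustrative rendering of the evolving-clustering idea behind rule nodes (each node holding a cluster center, a radius, and an example count); the class name, thresholds, and update rules are assumptions for illustration, not the published EFuNN algorithm.

```python
import numpy as np

class EvolvingClusterer:
    """A rule node holds a cluster center, a radius, and an example count;
    a new node is created whenever no existing node covers a sample."""

    def __init__(self, max_radius=0.3, lr=0.1):
        self.max_radius = max_radius      # maximum cluster radius R_j (assumed)
        self.lr = lr                      # learning rate for center adaptation
        self.centers, self.radii, self.counts = [], [], []

    def learn_one(self, x):
        x = np.asarray(x, dtype=float)
        if self.centers:
            d = np.array([np.linalg.norm(x - c) for c in self.centers])
            j = int(np.argmin(d))
            if d[j] <= self.max_radius:   # sample falls into existing cluster j
                self.centers[j] += self.lr * (x - self.centers[j])  # adapt center
                self.radii[j] = max(self.radii[j], float(d[j]))     # grow radius
                self.counts[j] += 1
                return j
        # no node covers the sample: evolve a new rule node
        self.centers.append(x.copy())
        self.radii.append(0.0)
        self.counts.append(1)
        return len(self.centers) - 1

clusterer = EvolvingClusterer()
for sample in np.random.rand(200, 2):     # an incoming stream of 2-D samples
    clusterer.learn_one(sample)
print(len(clusterer.centers), "rule nodes evolved")
```

Each evolved node corresponds to one local rule of the form (60.1) below.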
At any time of the EFuNN continuous incremental learning, rules can be derived from the structure; these rules represent clusters of data and the local functions associated with these clusters:

IF < data is in cluster Nc_j, defined by a cluster center N_j, a cluster radius R_j, and a number of examples N_jexamp in this cluster >
THEN < the output function is Fc > .   (60.1)

In the case of DENFIS, first-order local fuzzy rule models are derived incrementally from data, for example:

IF < the value of x1 is in the area defined by a Gaussian membership function with a center at 0.1 and a standard deviation of 0.05 >
AND < the value of x2 is in the area defined by a Gaussian function with parameters (0.25, 0.1), respectively >
THEN < the output y is calculated by the formula y = 0.01 + 0.7 x1 + 0.12 x2 > .   (60.2)

In the case of EFuNN, local simple fuzzy rule models are derived, for example:

IF x1 is (Medium 0.8) and x2 is (Low 0.6)
THEN y is (High 0.7); radius R = 0.24; N_examp = 6 ,   (60.3)

where Low, Medium, and High are fuzzy membership functions defined for the range of each of the variables x1, x2, and y; the number and the type of the membership functions can either be deduced from the data through learning algorithms, or can be predefined based on human knowledge [60.34, 35]; R is the radius of the cluster; and N_examp is the number of examples in the cluster.

Fig. 60.2 An EFuNN architecture with a short-term memory and feedback connections [60.33], with input, fuzzy input, rule (case), fuzzy output, and output layers. It is used in Sect. 60.3 to model GRN, with the inputs being the expression of genes at a time (t) and the outputs being the expression of genes/proteins at time (t + dt)

A further development of the EFuNN and DENFIS local ECOS models is the transductive weighted neuro-fuzzy inference engine (TWNFI) [60.30, 36]. In this approach, for every new vector (sample/example S), a personalized model is developed from the existing nearest samples, where each of the variables is normalized in a different subrange of [0,1] so that it has a different influence on the Euclidean distance from (60.1); the variables are therefore ranked in terms of their importance to the output, calculated for any new sample individually. Samples are also weighted in the model based on their distance to the new sample, and in the Euclidean distance formula the variables are also weighted. Each personalized model can be represented as a rule (or a set of rules) that represents the personalized profile for the new input vector. The TWNFI model is evolving, as new data samples, added to a data set, can be used in any further personalized model development. This includes using different sets of variables and features [60.30, 36].
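A minimal sketch of the transductive, weighted-distance step of such personalized modeling is given below; the per-variable weights, the neighbor count k, and the inverse-distance sample weighting are illustrative assumptions, not the exact formulation of [60.30, 36].

```python
import numpy as np

def weighted_distance(x, s, w):
    """Weighted Euclidean distance: each variable is scaled by an
    importance weight w_i before the distance is computed."""
    return np.sqrt(np.sum((w * (x - s)) ** 2))

def nearest_samples(x_new, data, w, k=5):
    """Select the k nearest stored samples used to build a personalized
    model; sample weights decay with distance to the new vector."""
    d = np.array([weighted_distance(x_new, s, w) for s in data])
    idx = np.argsort(d)[:k]
    return idx, 1.0 / (d[idx] + 1e-9)

data = np.random.rand(50, 3)            # stored samples, normalized to [0,1]
w = np.array([0.7, 1.0, 0.3])           # per-variable importance (assumed)
idx, sample_weights = nearest_samples(np.array([0.4, 0.5, 0.6]), data, w)
print(idx, np.round(sample_weights, 2))
```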
ECOS have been applied both to model brain functions and as general CI tools [60.30]. In one application, an ECOS was trained to classify EEG data measured from a single person's brain into four classes representing four perceptual states – hearing, seeing, both, and nothing [60.30]. In another application, ECOS were used to model emerging acoustic clusters when multiple spoken languages are learned [60.30].

ECOS have been applied to a wide range of CI applications, such as adaptive classification of gene expression data, adaptive robot control, adaptive financial data modeling, and adaptive environmental and social data modeling [60.30].

ECOS are used in Sect. 60.3 for building GRN models.

60.2.2 Spiking Neural Networks – Strongly Brain Inspired Models

Spiking models of a neuron and of neural networks (SNN) have been inspired by, and developed to mimic more biologically, the spiking activity of neurons in the brain when processing information.

Fig. 60.3 A general representation of a spiking neuron model: binary input events x1, ..., x4 are integrated (with leakage); a spike is emitted, followed by a refractory period (after [60.13])

One model, the spike response model (SRM) of a neuron [60.31, 37], is described below and extended in Sect. 60.4 to a CNGM.

A neuron i receives input spikes from presynaptic neurons j ∈ Γ_i, where Γ_i is the pool of all neurons presynaptic to neuron i. The state of the neuron i is described by the state variable u_i(t), which can be interpreted as
a total postsynaptic potential (PSP) at the membrane of the soma (Fig. 60.3). When u_i(t) reaches a firing threshold ϑ_i(t), neuron i fires, i.e., emits a spike. The value of the state variable u_i(t) is the sum of all postsynaptic potentials, i.e.,

$$u_i(t) = \sum_{j \in \Gamma_i} \sum_{t_j \in F_j} J_{ij}\,\varepsilon_{ij}\big(t - t_j - \Delta_{ij}^{\mathrm{ax}}\big) . \qquad (60.4)$$

The weight of the synaptic connection from neuron j to neuron i is denoted by J_ij. It takes positive (negative) values for excitatory (inhibitory) connections, respectively. Depending on the sign of J_ij, a presynaptic spike generated at time t_j increases (or decreases) u_i(t) by an amount ε_ij(t − t_j − Δ_ij^ax), where Δ_ij^ax is an axonal delay between neurons i and j, which increases with the Euclidean distance between the neurons.

The positive kernel ε_ij(t − t_j − Δ_ij^ax) = ε_ij(s) expresses an individual postsynaptic potential (PSP) evoked by a presynaptic neuron j on neuron i. A double exponential formula can be used:

$$\varepsilon_{ij}^{\mathrm{synapse}}(s) = A^{\mathrm{synapse}} \left[ \exp\!\left(-\frac{s}{\tau_{\mathrm{decay}}^{\mathrm{synapse}}}\right) - \exp\!\left(-\frac{s}{\tau_{\mathrm{rise}}^{\mathrm{synapse}}}\right) \right] . \qquad (60.5)$$

The following notations are used above: τ_decay/rise^synapse are the time constants of the fall and rise of an individual PSP, A is the PSP's amplitude, and "synapse" represents the type of the activity of the synapse from neuron j to neuron i, which can be measured and modeled separately for fast excitation, fast inhibition, slow excitation, and slow inhibition, all integrated in the above formula [60.13]. These types of PSPs are based on neurobiology [60.38] and will be the basis for the development of the computational neurogenetic model in Sect. 60.4, where the different synaptic activities are represented as functions of different proteins (neurotransmitters and neuroreceptors).

External inputs from the input layer are added at each time step, thus incorporating the background noise and/or the background oscillations. Each external input has its own weight J_ik^ext_input and amount of signal ε_k^ext_input(t), such that

$$u_i^{\mathrm{ext\_input}}(t) = J_{ik}^{\mathrm{ext\_input}}\,\varepsilon_{ik}^{\mathrm{ext\_input}}(t) . \qquad (60.6)$$

It is optional to add some degree of Gaussian noise to the right-hand side of the equation above to obtain a stochastic neuron model instead of a deterministic one.

SNN models can be built with the use of the above spiking neuron model; a minimal code sketch is given below. Spiking neurons within an SNN can be either excitatory or inhibitory. Lateral connections between neurons in an SNN may have weights that decrease in value with the distance from neuron i, for instance according to a Gaussian formula, while the connections between the neurons themselves can be established at random.

SNN can be used to build biologically plausible models of brain functions. Examples are given in [60.13, 31, 37, 38]. Figure 60.4 graphically shows an application of an SNN to model brain functions that connect signals from the thalamus to the temporal cortex (from [60.13]).

Fig. 60.4 An example of an SNN to model a function of the cortex, with internal inputs from the thalamus (one-to-many feedforward input connections from the input layer) and external input stimuli, and with Gaussian lateral connections within the cortex. About 20% of the N = 120 neurons are inhibitory neurons that are randomly positioned on the grid (filled circles). The external input is random, with a defined average frequency (e.g., between 10–20 Hz) (after [60.13])

Other applications of SNN include image recognition. In [60.39] an adaptive SNN model is developed where new SNN submodules (maps) are created incrementally to accommodate new data samples over time. For example, a new submodule of several spiking neurons and connections evolves when a new class of objects (e.g., a new face in the case of a face recognition problem) is presented to the system for learning at any time of this process. When there are no active inputs presented to the system, the system merges close spiking neuronal maps depending on their similarity.

Developing new methods for learning in evolving SNN is a challenging direction for future research, with a potential for applications in both computational neuroscience and pattern recognition, e.g., multimodal information processing – speech, image, odor, gestures, etc.

SNN are extended to CNGM in Sect. 60.4.
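As the sketch referenced above, the following Python fragment computes the double-exponential PSP kernel of (60.5) and the summed potential u_i(t) of (60.4) for a toy set of presynaptic spikes; the amplitudes, time constants, delay, and threshold values are illustrative assumptions.

```python
import numpy as np

def psp_kernel(s, A=1.0, tau_decay=10.0, tau_rise=2.0):
    """Double-exponential PSP kernel of (60.5); s is the time since the
    presynaptic spike (ms); the kernel is zero before the spike arrives."""
    s = np.asarray(s, dtype=float)
    eps = A * (np.exp(-s / tau_decay) - np.exp(-s / tau_rise))
    return np.where(s >= 0, eps, 0.0)

def potential(t, spikes, J, delay=1.0):
    """Summed PSP of (60.4): u_i(t) = sum_j sum_{t_j} J_ij eps(t - t_j - delay)."""
    return sum(J[j] * psp_kernel(t - tj - delay)
               for j, spike_times in enumerate(spikes)
               for tj in spike_times)

spikes = [[5.0, 20.0], [12.0]]     # spike times of two presynaptic neurons (ms)
J = [0.8, -0.5]                    # one excitatory and one inhibitory weight
t = np.arange(0.0, 50.0, 1.0)
u = potential(t, spikes, J)
fired = u > 1.0                    # compare with a firing threshold theta_i
print(round(float(u.max()), 3), bool(fired.any()))
```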
60.2.3 Open Questions

Further development of brain-like or brain inspired ANN requires that some questions be addressed:

• How is a balance between structure definition and learning achieved in ANN?
• How can ANN evolve and optimize their parameters and input features over time in an efficient way?
• How can incremental learning in ANN be applied without the presentation of an input signal (e.g., sleep learning)?
• How much should an ANN mimic the brain in order to become an efficient CI model?

60.3 Gene Inspired Methods of Computational Intelligence

60.3.1 The Central Dogma in Molecular Biology and GRN

The central dogma of molecular biology states that the DNA, which resides in the nucleus of a cell or a neuron, transcribes into RNA, which then translates into proteins. This process is continuous and evolving, so that proteins called transcription factors in turn cause genes to transcribe, etc. [60.40, 41] (Fig. 60.5).

Fig. 60.5 The genes in the DNA transcribe into RNA (genes copied as mRNA), which then translates into proteins (mRNA translated into protein) that define the function of the cell; a protein–gene feedback loop acts through transcription factors (the central dogma of molecular biology)

The DNA is a long, double-stranded sequence (a double helix) of millions or billions of four base molecules (nucleotides), denoted as A, C, T, and G, which are chemically and physically connected to each other through other molecules. In the double helix, they make pairs such that every A from one strand is connected to a corresponding T on the opposite strand, and every C is connected to a G. A gene is a sequence of hundreds to thousands of bases that is part of the DNA and is translated into a protein. Less than 5% of the DNA of the human genome constitutes genes; the other part is a noncoding region that nevertheless contains useful information as well.

The DNA of each organism is unique and resides in the nucleus of each of its cells. But it is the proteins that are expressed from the genes and define the function of the cell that make a cell alive. The genes and proteins in each cell are connected in a dynamic GRN consisting of regulatory pathways. Normally, only a few hundred genes are expressed as proteins in a particular cell. At the transcription phase, one gene is transcribed into many RNA copies, and their number defines the expression level of this gene [60.40, 41]. Some genes may be over-expressed, resulting in too much protein in the cell; some genes may be under-expressed, resulting in too little protein. In both cases the cell may be functioning in a wrong way, which may be causing a disease. Abnormal expression of a gene can be caused by a gene mutation – a random change in the code of the gene, where a base molecule is either inserted or deleted, or altered into another base molecule.
Drugs can be used to stimulate or suppress the expression of certain genes and proteins, but how that will indirectly affect the other genes related to the targeted one must be evaluated, and this is where computational modeling of GRN can help.

It is always difficult to establish the interaction between genes and proteins. The question What will happen with a cell or the whole organism if one gene is under-expressed or missing? is now being attempted by the use of a technology called knock-out gene technology. This technology is based on the removal of a gene sequence from the DNA and letting the cell/organism develop, where parameters are measured and compared with the parameters when the gene was not missing.

60.3.2 GRN-ANN Models

Modeling GRN is the task of creating a dynamic interaction network between genes that defines the next-time expression of genes based on their previous-time expression. A detailed discussion of the methods for GRN modeling can be found in [60.41, 43, 44]. Models of GRN, derived from gene expression data, have been developed with the use of different mathematical and computational methods, such as: statistical correlation techniques; evolutionary computation; ANN; differential equations, both ordinary and partial; Boolean models; kinetic models; state-based models; and others [60.41].

A model of GRN trained on time-course data is presented in [60.42], where data of the human fibroblast response to serum is used (Fig. 60.6) and a GRN is extracted from it (Fig. 60.7). The method uses a genetic algorithm to select the initial cluster centers of the clustered time-course gene expression values and then applies a Kalman filter to derive the gene connecting equations.

Fig. 60.6 Time-course gene expression data (log10 of expression over time in hours) representing the response of thousands of genes of fibroblast to serum (after [60.42])

In [60.44] a GRN-ECOS is proposed and applied on small-scale cell line gene expression data. An ECOS is evolved with the inputs being the expression levels of a certain number of selected genes (e.g., 4) at a time moment (t), and the outputs being the expression levels of the same or other genes/proteins at the next time moment (t + dt). After an ECOS is trained on time-course gene expression data, rules are extracted from the ECOS and linked to each other in terms of the time-arrows of their creation in the model, thus representing the GRN. The rule nodes in an ECOS capture clusters of input genes that are related to the output genes/proteins at the next time moment.
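ECOS learn such G(t) → G(t + dt) mappings locally and represent them as fuzzy rules. As a minimal stand-in for illustration only, the sketch below fits a single linear interaction matrix to toy time-course data by least squares and thresholds it; this mimics the next-time-step formulation, not the ECOS training procedure itself.

```python
import numpy as np

# Toy time-course data: rows are time points, columns are genes.
rng = np.random.default_rng(0)
G = rng.random((20, 4))                  # 20 time points, 4 genes

# Fit a linear interaction matrix W so that G(t+dt) ~ G(t) W.
X, Y = G[:-1], G[1:]
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# W[j, k] is read as the influence of gene j at time t on gene k at t+dt;
# thresholding |W| keeps only the stronger interactions, analogous to
# extracting stronger or weaker rules by changing the extraction threshold.
strong = np.abs(W) > 0.3
print(np.round(W, 2))
print(strong)
```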
Figure 60.7 shows an example of EFuNN used for modeling GRN [60.33, 44].

Fig. 60.7 A GRN obtained with the use of the method from [60.42] on the data from Fig. 60.6, after the time-course gene expression series are clustered into 10 clusters. The nodes represent gene clusters, while the arcs represent the dynamic relation (interaction) between these gene groups over consecutive time moments

The rules extracted from an EFuNN model, for example, represent the relationship between the gene expression of a group of genes G(t) at a time moment t and the expression of the genes at the next time moment G(t + dt), e.g.:

IF g13(t) is High (0.87) and g23(t) is Low (0.9)
THEN g87(t + dt) is High (0.6) and g103(t + dt) is Low .   (60.7)

Through modifying a threshold for rule extraction, one can extract stronger or weaker patterns of a dynamic relationship.

Adaptive training of an ECOS makes incremental learning of a GRN possible, as well as adding new inputs/outputs (new genes) to the GRN.

A set of DENFIS models can be trained, one for each gene g_i, so that an input vector is the expression vector G(t) and the output is the single variable g_i(t + dt). DENFIS allows for a dynamic partitioning of the input space. Takagi–Sugeno fuzzy rules, which represent the relationship between a gene g_i and the rest of the genes, are extracted from each DENFIS model, e.g.:

IF g1 is (0.63, 0.70, 0.76) and g2 is (0.71, 0.77, 0.84) and g3 is (0.71, 0.77, 0.84) and g4 is (0.59, 0.66, 0.72)
THEN g5 = 1.84 − 1.26 g1 − 1.22 g2 + 0.58 g3 − 0.03 g4 .   (60.8)

The ECOS structure from Fig. 60.2 can be used in a multilevel, hierarchical way, where the transcription process is represented in one ECOS and the translation in another ECOS, whose inputs are connected to the outputs of the first one, using feedback connections to represent transcription factors.

Despite the variety of different methods used so far for modeling GRN, and for systems biology in general, there is no single method that will suit all the requirements to model a complex biological system, especially to meet the requirements for adaptation, robustness, and information integration.

In the next section, GRN modeling is integrated with SNN to model the interaction between genes/proteins in relation to the activity of a spiking neuron and of an SNN as a whole.

60.4 Computational Neurogenetic Models

60.4.1 General Notions

With the advancement of molecular and brain research technologies, more and more data and information are being made available about the genetic basis of some neuronal functions (see, for example, the brain-gene map of a mouse [60.45] and the brain-gene ontology BGO in [60.46]).

This information can be utilized to create biologically plausible ANN models of brain functions and diseases that include models of gene interaction. This area integrates knowledge from computer and information science, brain science, and molecular genetics, and it is here called CNGM [60.2].

A CNGM integrates genetic, proteomic, and brain activity data and performs data analysis, modeling, prognosis, and knowledge extraction that reveals the relationship between brain functions and genetic information. Let us look at this process as a process of building a mathematical function or a computational algorithm, as follows.

A future state of a molecule M or a group of molecules (e.g., genes and proteins) depends on its current state M and on an external signal Em (the prime denoting the next state):

$$M' = F_m(M, E_m) . \qquad (60.9)$$

A future state N of a neuron or an ensemble of neurons will depend on its current state N, on the state of the molecules M (e.g., genes), and on external signals En:

$$N' = F_n(N, M, E_n) . \qquad (60.10)$$

Finally, a future state C of the brain will depend on its current state C and also on the neuronal state N, the molecular state M, and the external stimuli Ec:

$$C' = F_c(C, N, M, E_c) . \qquad (60.11)$$
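Read operationally, (60.9)–(60.11) describe a nested update loop in which each level feeds the levels above it at every time step. The stub below only illustrates this composition; the linear toy functions and coefficients are arbitrary placeholders, not a model of any real system.

```python
def Fm(M, Em):            # molecular update (60.9) - illustrative stub
    return 0.9 * M + 0.1 * Em

def Fn(N, M, En):         # neuronal update (60.10) - illustrative stub
    return 0.8 * N + 0.15 * M + 0.05 * En

def Fc(C, N, M, Ec):      # cognitive update (60.11) - illustrative stub
    return 0.7 * C + 0.2 * N + 0.05 * M + 0.05 * Ec

M, N, C = 0.0, 0.0, 0.0
for t in range(100):      # each level feeds the one above at every step
    M = Fm(M, Em=1.0)
    N = Fn(N, M, En=0.5)
    C = Fc(C, N, M, Ec=0.2)
print(round(M, 3), round(N, 3), round(C, 3))
```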
The above set of equations (or algorithms) is a general one, and in different cases it can be implemented differently, e.g.: one gene – one neuron/brain function; multiple genes – one neuron/brain function, with no interaction between the genes; multiple genes – multiple neuron/brain functions, where the genes interact in a GRN and the neurons also interact in a neural network architecture; multiple genes – complex brain/cognitive function(s), where the genes interact within a GRN and the neurons interact in several hierarchical neural networks.

Several CNGM models have been developed so far, varying from modeling a single gene in a biologically realistic ANN model [60.3] to modeling a set of genes forming an interaction GRN [60.13, 43]. In the next section we give an example of a CNGM that combines SNN and GRN into one model [60.13].

60.4.2 A Computational Neurogenetic Model that Integrates GRN Within an SNN Model

The main idea behind the model proposed in [60.2] is that the interaction of genes in neurons affects the dynamics of the whole ANN through neuronal parameters, which are no longer constant but change as a function of gene/protein expression. Through optimization of the GRN, the initial gene/protein expression values, and the ANN parameters, particular target states of the ANN can be achieved, so that the ANN can be tuned to model real brain data in particular.

This idea is illustrated in Fig. 60.8. The behavior of the SNN is evaluated by means of the local field potential (LFP), thus making it possible to attempt modeling the role of genes in different brain states, where EEG data is available to test the model. A standard FFT signal processing technique is used to evaluate the SNN output and to compare it with real human EEG data.

Fig. 60.8 A CNGM, where a GRN is used to represent the interaction of genes, and an SNN is employed to model a brain function; the model output (the relative intensity ratios of the LFP in frequency subbands over time, in ms) is compared against real brain data for validation of the model and for verifying the derived gene interaction GRN after model optimization is applied [60.13]
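A sketch of this FFT-based comparison is given below: the relative power of a model LFP and of a target EEG signal is computed in the subbands used later in this section, and their distance can serve as a closeness measure; the sampling rate and the test signals are illustrative.

```python
import numpy as np

def band_power(signal, fs, bands):
    """Relative power of an LFP/EEG signal in given frequency bands."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    total = power.sum()
    return {name: power[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in bands.items()}

bands = {"delta": (0.5, 3.5), "theta": (3.5, 7.5), "alpha": (7.5, 12.5),
         "beta1": (12.5, 18.0), "beta2": (18.0, 30.0), "gamma": (30.0, 50.0)}

fs = 200.0                                   # sampling rate (Hz), illustrative
t = np.arange(0, 10, 1 / fs)
lfp = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(len(t))  # model LFP
eeg = np.sin(2 * np.pi * 9 * t) + 0.3 * np.random.randn(len(t))   # target EEG

p_model, p_target = band_power(lfp, fs, bands), band_power(eeg, fs, bands)
closeness = sum((p_model[b] - p_target[b]) ** 2 for b in bands)
print(round(closeness, 4))                   # smaller = closer spectra
```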
A broader theoretical and biological background of the CNGM construction is given in [60.13].

In general, we consider two sets of genes – a set G_gen that relates to general cell functions and a set G_spec that defines specific neuronal information-processing functions (receptors, ion channels, etc.). The two sets together form a set G = {G_1, G_2, ..., G_n}. We assume that the expression level of each gene is a nonlinear function of the expression levels of all the genes in G:

$$g_j(t + \Delta t) = \sigma\!\left( \sum_{k=1}^{n} w_{jk}\, g_k(t) \right) . \qquad (60.12)$$

In [60.13] it is assumed that:

1. One protein is coded by one gene.
2. The relationship between the protein level and the gene expression level is nonlinear.
3. Protein levels lie between minimal and maximal values.

Thus, the protein level is expressed by

$$p_j(t + \Delta t) = \left( p_j^{\max} - p_j^{\min} \right) \sigma\!\left( \sum_{k=1}^{n} w_{jk}\, g_k(t) \right) + p_j^{\min} . \qquad (60.13)$$

The delay constant introduced in the formula corresponds to the delay caused by the gene transcription, the mRNA translation into proteins, and the posttranslational protein modifications, and also to the delay caused by gene transcription regulation by transcription factors.

Some proteins and genes are known to affect the spiking activity of a neuron, represented in an SNN model by neuronal parameters such as fast excitation, fast inhibition, slow excitation, and slow inhibition (Sect. 60.2). Some neuronal parameters and their correspondence to particular proteins are summarized in Table 60.1.

Table 60.1 Neuronal parameters and related proteins (PSP: postsynaptic potential; AMPAR: (amino-methylisoxazole-propionic acid) AMPA receptor; NMDAR: (N-methyl-D-aspartate acid) NMDA receptor; GABRA: (gamma-aminobutyric acid) GABA_A receptor; GABRB: GABA_B receptor; SCN: sodium voltage-gated channel; KCN: kalium (potassium) voltage-gated channel; CLC: chloride channel; PV: parvalbumin)

Neuronal parameter (amplitude and time constants)   Protein
Fast excitation PSP                                 AMPAR
Slow excitation PSP                                 NMDAR
Fast inhibition PSP                                 GABRA
Slow inhibition PSP                                 GABRB
Firing threshold                                    SCN, KCN, CLC
Late excitatory PSP through GABRA                   PV

Besides the genes coding for the proteins mentioned above, which directly affect the spiking dynamics of a neuron, a GRN model can include other genes relevant to the problem at hand, e.g., modeling a brain function or a brain disease. In [60.13] these genes/proteins are c-jun, mGLuR3, Jerky, BDNF, FGF-2, IGF-I, GALR1, NOS, and S100beta.

The goal of the CNGM in Fig. 60.8 is to achieve a desired SNN output through optimization of the model parameters. The LFP of the SNN, defined as LFP = (1/N) Σ u_i(t), is evaluated by means of the FFT in order to compare the SNN output with the EEG signal analyzed in the same way. It has been shown that brain LFPs in principle have the same spectral characteristics as EEG [60.47].
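The two update equations can be put together in a few lines. The sketch below iterates (60.12) and (60.13) for a toy GRN and maps one protein level onto one neuronal parameter, in the spirit of Table 60.1; the interaction matrix, the bounds, and the scaling are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n = 5
rng = np.random.default_rng(1)
W = rng.uniform(-1, 1, (n, n))         # gene interaction matrix w_jk (toy)
g = rng.random(n)                       # initial gene expression levels g(0)
p_min, p_max = np.zeros(n), np.ones(n)  # protein level bounds

for step in range(100):
    p = (p_max - p_min) * sigmoid(W @ g) + p_min   # protein levels, (60.13)
    g = sigmoid(W @ g)                             # gene update, (60.12)

# One protein level modulates one neuronal parameter, e.g., the amplitude
# of the fast-excitation PSP (AMPAR in Table 60.1); the scaling is assumed.
A_fast_excitation = 0.5 + 1.5 * p[0]
print(np.round(p, 3), round(float(A_fast_excitation), 3))
```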
In order to find an optimal GRN within the SNN model, so that the frequency characteristics of the LFP of the SNN model are similar to the brain EEG characteristics, the following evolutionary computation procedure is used (a code sketch of this loop is given at the end of this subsection):

1. Generate a population of CNGMs, each with randomly, but constrained, generated values of the coefficients of the GRN matrix W, the initial gene expression values g(0), the initial values of the SNN parameters P(0), and different connectivity.
2. Run each SNN model over a period of time T and record the LFP.
3. Calculate the spectral characteristics of the LFP using the FFT.
4. Compare the spectral characteristics of the SNN LFP to the characteristics of the target EEG signal. Evaluate the closeness of the LFP signal of each SNN to the target EEG signal characteristics. Proceed further according to the standard GA algorithm to find an SNN model that matches the EEG spectral characteristics better than the previous solutions.
5. Repeat steps 1 to 4 until the desired GRN and SNN model behavior is obtained.
6. Analyze the GRN and the SNN parameters for significant gene patterns that cause the SNN model to manifest spectral characteristics similar to those of the real brain data.

The proposed CNGM modeling framework can be used to find patterns of gene regulation related to brain functions. In [60.13] some preliminary results of an analysis performed on real human interictal EEG data are presented. The model performance and the real EEG data are compared for the following subbands relevant to the problem: delta (0.5–3.5 Hz), theta (3.5–7.5 Hz), alpha (7.5–12.5 Hz), beta 1 (12.5–18 Hz), beta 2 (18–30 Hz), and gamma (above 30 Hz). This particular SNN had an evolved GRN with only 5 genes out of 16 (s100beta, GABRB, GABRA, mGLuR3, c-jun), all the other genes having constant expression values. A GRN is obtained that has a meaningful interpretation and can be used to model, for example, what will happen if a gene/protein is suppressed by administering a drug.

In an evolving CNGM, new genes can be added to the GRN model at a certain time, in addition to the new spiking neurons and connections created incrementally, as is the case in evolving SNN. Developing new evolving CNGM to model brain functions and brain diseases, such as epilepsy, Alzheimer's, Parkinson's disease, schizophrenia, mental retardation, and others, is a challenging problem for future research [60.13, 43].
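As referenced in the procedure above, the following compressed sketch shows the shape of such an evolutionary loop. Truncation selection and Gaussian mutation stand in for the full GA of [60.13], and a placeholder fitness marks where the SNN simulation and the spectral comparison (see the earlier sketches) would be called.

```python
import numpy as np

rng = np.random.default_rng(2)
n_genes, pop_size = 16, 20

def evaluate(W):
    """Fitness of one CNGM candidate: run the GRN-SNN (stubbed here),
    compute the LFP spectrum, return its distance to the target EEG
    spectrum. The expression below is only a placeholder."""
    return float(np.sum(W ** 2))

population = [rng.uniform(-1, 1, (n_genes, n_genes)) for _ in range(pop_size)]
for generation in range(50):
    scores = [evaluate(W) for W in population]
    elite = [population[i] for i in np.argsort(scores)[:pop_size // 2]]
    # refill the population with mutated copies of the better half
    population = elite + [W + 0.1 * rng.standard_normal(W.shape) for W in elite]

best = population[int(np.argmin([evaluate(W) for W in population]))]
print("best fitness:", round(evaluate(best), 4))
```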
60.4.3 Open Questions

Some questions emerged from the first CNGM experiments:

• How many different GRNs would lead to similar LFPs, and what do they have in common?
• What neuronal parameters should be included in an ANN model, and how can they be linked to activities of genes/proteins?
• What genes/proteins should be included in the model, and can the gene interaction be represented over time within each neuron?
• How can the output activity of the ANN and the genes be integrated in time, given that neurons spike in millisecond intervals while the process of gene transcription and translation into proteins takes minutes?
• How can a CNGM be created and evaluated in a situation of insufficient data?
• How can brain activity and the CNGM activity be measured in order to validate the model?
• What useful information (knowledge) can be derived from a CNG model?
• How can a CNGM model be adapted incrementally in a situation of new incoming data about brain functions and the genes related to them?

Integrating principles from gene and neuronal information processing in a single ANN model raises many other, more general, questions that need to be addressed in the future, for example:

• Is it possible to create a truly adequate CNGM of the whole brain? Would gene-brain maps help in this respect [60.3]?
• How can a dynamic CNGM be used to trace over time and predict the progression of a brain disease, such as epilepsy and Parkinson's?
• How can CNGM be used to model gene mutation effects?
• How can CNGM be used to predict drug effects?
• How can CNGM help us to better understand brain functions, such as memory and learning?
• What CI problems can be efficiently solved with the use of a brain-gene inspired ANN?

60.5 Quantum Inspired CI

60.5.1 Quantum Level of Information Processing

At the quantum level, particles (e.g., atoms, electrons, ions, photons) are in a complex evolving state all the time. Atoms are the material that everything is made of; they can change their characteristics due to the frequency of external signals. Quantum computation is based upon physical principles from the theory of quantum mechanics [60.48].

One of the basic principles is the linear superposition of states. At a macroscopic or classical level, a system exists only in a single basis state, as energy, momentum, position, spin, and so on. However, at a microscopic or quantum level, a quantum particle (e.g., atom, electron, positron, ion) or a quantum system is in a superposition of all possible basis states. At the microscopic level, a particle can assume different positions at the same time moment, can have different values of energy, can have different spins, and so on. This superposition principle is counterintuitive, because in classical physics one particle has only one position, energy, spin, etc.

If a quantum system interacts in any way with its environment, the superposition is assumed to be destroyed and the system collapses into one single real state, as in classical physics (Heisenberg). This process is governed by a probability amplitude. The square of the intensity of the probability amplitude is the quantum probability to observe the state.

Another quantum mechanics principle is entanglement – two or more particles, regardless of their location, are in the same state with the same probability function. The two particles can be viewed as correlated, undistinguishable, synchronized, coherent. An example is a laser beam consisting of millions of photons having the same characteristics and states.

Quantum systems are described by a probability density ψ that exists in a Hilbert space. The Hilbert space has a set of states |ϕ_i⟩ forming a basis. A system can exist in a certain quantum state |ψ⟩, which is defined as

$$|\psi\rangle = \sum_i c_i\,|\phi_i\rangle , \qquad \sum_i |c_i|^2 = 1 , \qquad (60.14)$$

where the coefficients c_i may be complex. |ψ⟩ is said to be in a superposition of the basis states |ϕ_i⟩. For example, the quantum inspired analog of a single bit in classical computers can be represented as a qu-bit in a quantum computer:

$$|x\rangle = a\,|0\rangle + b\,|1\rangle , \qquad (60.15)$$

where |0⟩ and |1⟩ represent the states 0 and 1, and a and b are their probability amplitudes, respectively. The qu-bit is not a single-value entity, but a function of parameters whose values are complex numbers. After the loss of coherence, the qu-bit will collapse into one of the states |0⟩ or |1⟩, with probability |a|² for the state |0⟩ and probability |b|² for the state |1⟩.

The state of a quantum particle (represented, for example, as a qu-bit) can be changed by an operator called a quantum gate. A quantum gate is a reversible gate and can be represented as a unitary operator U acting on the qu-bit basis states. The defining property of a unitary matrix is that its conjugate transpose is equal to its inverse. Several quantum gates have been introduced, such as the NOT gate, the controlled NOT gate, the rotation gate, the Hadamard gate, etc. [60.49–52].
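A few lines of code make the qu-bit formalism of (60.14)–(60.15) concrete: a Hadamard gate puts a basis state into an equal superposition, and a measurement collapses it with probabilities |a|² and |b|². The sketch is, of course, a classical simulation of these rules, not quantum computation.

```python
import numpy as np

rng = np.random.default_rng(0)

qubit = np.array([1, 0], dtype=complex)        # start in the basis state |0>

# The Hadamard gate is unitary: its conjugate transpose equals its inverse.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
qubit = H @ qubit                              # now (|0> + |1>)/sqrt(2)

def measure(q):
    """Loss of coherence: collapse to 0 or 1 with probabilities |a|^2, |b|^2."""
    return 0 if rng.random() < abs(q[0]) ** 2 else 1

counts = [measure(qubit) for _ in range(1000)]
print(sum(counts) / 1000)                      # close to 0.5 for this state
```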
60.5.2 Why Quantum Inspired CI?

Quantum mechanical computers and quantum algorithms try to exploit the massive quantum parallelism which is expressed in the principle of superposition. The principle of superposition can be applied to many existing methods of CI, where instead of a single state (e.g., a parameter value, a finite automaton state, a connection weight, etc.) a superposition of states will be used, described by a wave probability function, so that all these states will be computed in parallel, resulting in an increased speed of computation by many orders of magnitude [60.5, 8, 9, 49–57].

Quantum mechanical computers were proposed in the early 1980s, and a description was formalized in the late 1980s. These computers, when implemented, are expected to be superior to classical computers in various specialized problems. Much effort has been made to extend the principal ideas of quantum mechanics to other fields of interest. There are well-known quantum algorithms, such as Shor's quantum factoring algorithm [60.58] and Grover's database search algorithm [60.50, 54].

The advantage of quantum computing is that, while a system is uncollapsed, it can carry out more computing than a collapsed system, because, in a sense, it is computing in many universes at once.

The above quantum principles have inspired research in both computational methods and brain study. New theories (some of them speculative at this stage) have already been formulated. For example, Penrose [60.8, 9] argues that solving the quantum measurement problem is a prerequisite for understanding the mind, and that consciousness emerges as a macroscopic quantum state due to a coherence of quantum-level events within neurons.

60.5.3 Quantum Inspired Evolutionary Computation and Connectionist Models
Quantum inspired methods of evolutionary computation (QIEC) and other techniques were proposed and discussed in [60.51, 55]. They include genetic programming [60.59], particle swarm optimizers [60.60], finite automata and Turing machines, etc.

In QIEC, a population of n qu-bit individuals at time t can be represented as

$$Q(t) = \left\{ q_1^t, q_2^t, \ldots, q_n^t \right\} , \qquad (60.16)$$

where n is the size of the population.

Evolutionary computing with a qu-bit representation has a better characteristic of population diversity than other representations, since it can represent a linear superposition of states probabilistically. The qu-bit representation leads to a quantum parallelism of the system, as it is possible to evaluate the fitness function on a superposition of possible inputs. The output obtained is also in the form of a superposition, which needs to be collapsed to obtain the actual solution.

Recent research activities have focused on using quantum principles for ANN [60.4, 5, 61–63]. Considering quantum ANN seems to be important for at least two reasons. First, there is evidence for the role that quantum processes play in the living brain: Penrose argued that a new physics binding quantum phenomena with general relativity can explain such mental abilities as understanding, awareness, and consciousness [60.9]. The second motivation is the possibility that the field of classical ANN could be generalized to the promising new field of quantum computation [60.53]. Both considerations suggest a new understanding of mind and brain functions, as well as new, unprecedented abilities in information processing. Ezhov and Ventura consider quantum neural networks as the next natural step in the evolution of neurocomputing systems [60.4].

Several quantum inspired ANN models have been proposed and illustrated on small examples. In [60.63] a QIEA is used to train an MLP ANN. Narayanan and Meneer simulated classical and quantum inspired ANN and compared their performances [60.5]. Their work suggests that there are, indeed, certain types of problems for which quantum neural networks will prove superior to classical ones.

Other relevant work includes quantum decision making, quantum learning models [60.64], quantum networks for signal recognition [60.62], and quantum associative memory [60.61, 65]. There are also recent approaches to quantum competitive learning, where the quantum system's potential for excellent performance is demonstrated on real-world data sets [60.66, 67].
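A minimal sketch of a QIEC-style loop over a population of qu-bit individuals, as in (60.16), is shown below: each qu-bit is stored as an angle, observation collapses it to a classical bit string, and a rotation-gate-like update moves the population toward the best observed solution. The toy objective, step size, and clipping are illustrative assumptions rather than a specific published algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
n_bits, pop_size = 8, 10

# Q(t) = {q_1^t, ..., q_n^t}: each individual is a string of qu-bits stored
# as angles; amplitudes (cos th, sin th) satisfy a^2 + b^2 = 1 automatically.
theta = np.full((pop_size, n_bits), np.pi / 4)

def observe(theta):
    """Collapse each qu-bit to 0/1, with probability sin^2(theta) for 1."""
    return (rng.random(theta.shape) < np.sin(theta) ** 2).astype(int)

def fitness(bits):                      # toy objective: maximize the number of 1s
    return bits.sum(axis=1)

for generation in range(30):
    bits = observe(theta)
    best = bits[np.argmax(fitness(bits))]
    # rotation-gate-like update: rotate each qu-bit toward the best solution
    theta += 0.05 * np.where(best == 1, 1.0, -1.0)
    theta = np.clip(theta, 0.01, np.pi / 2 - 0.01)

print(observe(theta)[0])                # typically close to all ones
```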
These particles are characterized molecules (e.g., genes, proteins) depends on its current by their quantum properties. So, quantum properties state M, on the quantum state Q of the particles, and on may influence, under certain conditions, the spiking ac- an external signal Em: tivity of neurons and of the whole brain, as brains obey the laws of quantum mechanics (as everything else in M = Fm (Q, M, Em) . (60.18) the material world does). N Part A 60.6 A future state of a spiking neuron or an ensemble c Similarly to a chemical effect of a drug to the protein of neurons will depend on its current state N, on the and gene expression levels in the brain, which may af- state of the molecules M, on the state of the particles Q, fect the spiking activity and the functioning of the whole and on external signals En rre brain (modeling of these effects is subject of the com- putational neurogenetic modeling), external factors like N = Fn (N, M, Q, En) . (60.19) radiation, light, high frequency signals, etc., can influ- Finally, a future neuronal state of the brain will C ence the quantum properties of the particles in the brain depend on its current state C and also on the neuronal through gate operators. According to Penrose [60.9] mi- N, on the molecular M, and on the quantum Q states of crotubules in the neurons are associated with quantum the brain: gates, even though what constitutes a quantum gate in the brain is still a highly speculative topic. C = Fc (C, N, M, Q, Ec) . (60.20) So, the question is: Is it possible to create an SNN co The above hypothetical model of integrated function model and a CNGM that incorporate some quantum representations is based on the following assumptions: principles? A QI-SNN can be developed as an extension of • A large number of atoms are characterized by the SPIN: 12742608 (Springer Handbook of Bio-/Neuroinformatics) the concept of evolving SNN [60.39] using the super- same quantum properties, possibly related to the position principle, where instead of many SNN maps, same gene/protein expression profile of a large num- each representing one object (e.g., a face), there will ber of neurons characterized by spiking activity that be a single SNN, where both connections and neurons can be represented as a function. Un are represented as particles, being in many states at the • A large neuronal ensemble can be represented by same time defined as probability wave function. When a single LFP function. an input vector is presented to the QI-SNN, the net- • A cognitive process can be represented, at an ab- work collapses in a single SNN defining the class of the stract level, as a function Fc that depends on all Created on: 17 April 2013 10:50 CET recognized input vector. lower levels of neuronal, genetic, and quantum activities. 60.6.2 A Conceptual Framework of a QI-CNGM 60.6.3 Open Questions MS ID: hb23-060 Proof 1 Here we extend the concept of CNGM (60.9)–(60.11) Several reasons can be given in support of the research by introducing the level of quantum information pro- on integrating principles from quantum, molecular, and cessing. This results in a conceptual and hypothetical brain information processing into future CI models: CE 8 Please check that this is the intended meaning. 
60.6.3 Open Questions

Several reasons can be given in support of the research on integrating principles from quantum, molecular, and brain information processing into future CI models:

• This may lead to a better understanding of neuronal, molecular, and quantum information processes.
• This may lead to new computer devices – a million times faster and more accurate than the current ones.
• At the nanolevel of microelectronic devices, quantum processes have a significant impact, and new methods of computation would be needed anyway.

60.7 Conclusions and Directions for Further Research

This chapter presented some CI models inspired by principles from different levels of information processing in the brain – the neuronal level, the gene/protein level, and the quantum level – and argued that CI models that integrate principles from different levels of information processing would be useful tools for a better understanding of brain functions and for the creation of more powerful methods and systems of computational intelligence.

Many open questions need to be answered in the future; some of these are:

• How do quantum processes affect the functioning of a living system in general?
• How do quantum processes affect cognitive and mental functions?
• Is it true that the brain is a quantum machine – working in a probabilistic space, with many states (e.g., thoughts) being in a superposition all the time, so that it is only when we formulate our thought through speech or writing that the brain collapses into a single state?
• Is fast pattern recognition in the brain, involving far-away segments, a result of both parallel spike transmissions and particle entanglement?
• Is communication between people, and between living organisms in general, a result of entanglement processes?
• How does the energy in the atoms relate to the energy of the proteins, the cells, and the whole brain?
• Would it be beneficial to develop different QI computational intelligence techniques, such as QI-SVM, QI-GA, QI-decision trees, QI-logistic regression, QI-cellular automata, and QI-ALife?
• How do we implement QI computational intelligence algorithms in order to benefit from their high speed and accuracy? Should we wait for quantum computers to be realized many years from now, or can we implement them efficiently on specialized computing devices based on classical principles of physics?

Further directions in our research are:

• Building a brain–gene–quantum ontology system that integrates facts, information, knowledge, and CI models of different levels of information processing in the brain and their interaction.
• Building novel brain, gene, and quantum inspired CI models, studying their characteristics, and interpreting the results.
• Applying the new methods to solving complex CI problems in neuroinformatics and brain diseases, bioinformatics and cancer genetics, multimodal information processing, and biometrics.

References

60.1 C. Bishop: Neural Networks for Pattern Recognition (Oxford Univ. Press, Oxford 1995)
60.2 N. Kasabov, L. Benuskova: Computational neurogenetics, Int. J. Theor. Comput. Nanosci. 1(1), 47–61 (2004)
Further directions in our research are:

• Building a brain–gene–quantum ontology system that integrates facts, information, knowledge, and CI models of the different levels of information processing in the brain and of their interaction.
• Building novel brain, gene, and quantum inspired CI models, studying their characteristics, and interpreting the results.
• Applying the new methods to solving complex CI problems in neuroinformatics and brain diseases, bioinformatics and cancer genetics, multimodal information processing, and biometrics.

References

60.1 C. Bishop: Neural Networks for Pattern Recognition (Oxford Univ. Press, Oxford 1995)
60.2 N. Kasabov, L. Benuskova: Computational neurogenetics, Int. J. Theor. Comput. Nanosci. 1(1), 47–61 (2004)
60.3 G. Marcus: The Birth of the Mind: How a Tiny Number of Genes Creates the Complexity of the Human Mind (Basic Books, New York 2004)
60.4 A. Ezhov, D. Ventura: Quantum neural networks. In: Future Directions for Intelligent Systems and Information Sciences, ed. by N. Kasabov (Physica, Heidelberg 2000) pp. 213–234
60.5 A. Narayanan, T. Meneer: Quantum artificial neural network architectures and components, Inf. Sci., 199–215 (2000)
60.6 D.B. Fogel: Evolutionary Computation – Toward a New Philosophy of Machine Intelligence (IEEE Press, New York 1995)
60.7 X. Yao: Evolutionary artificial neural networks, Int. J. Neural Syst. 4(3), 203–222 (1993)
60.8 R. Penrose: Shadows of the Mind: A Search for the Missing Science of Consciousness (Oxford Univ. Press, Oxford 1994)
60.9 R. Penrose: The Emperor's New Mind (Oxford Univ. Press, Oxford 1989)
60.10 S. Amari, N. Kasabov: Brain-like Computing and Intelligent Information Systems (Springer, New York 1998)
60.11 M. Arbib: Brains, Machines and Mathematics (Springer, Berlin 1987)
60.12 M. Arbib (Ed.): The Handbook of Brain Theory and Neural Networks (MIT Press, Cambridge 2003)
60.13 L. Benuskova, N. Kasabov: Towards Computational Neurogenetic Modelling (Springer, New York 2007)
60.14 G. Carpenter, S. Grossberg, N. Markuzon, J.H. Reynolds, D.B. Rosen: Fuzzy ARTMAP: A neural network architecture for incremental supervised learning of analogue multi-dimensional maps, IEEE Trans. Neural Netw. 3(5), 698–713 (1992)
60.15 G. Carpenter, S. Grossberg: Pattern Recognition by Self-Organizing Neural Networks (MIT Press, Cambridge 1991)
60.16 N. Kasabov: Foundations of Neural Networks, Fuzzy Systems and Knowledge Engineering (MIT Press, Cambridge, MA 1996)
60.17 T. Kohonen: Self-Organizing Maps (Springer, Berlin 1997)
60.18 E. Rolls, A. Treves: Neural Networks and Brain Function (Oxford Univ. Press, Oxford 1998)
60.19 F. Rosenblatt: Principles of Neurodynamics (Spartan Books, New York 1962)
60.20 D.E. Rumelhart, G.E. Hinton, R.J. Williams: Learning internal representations by error propagation. In: Parallel Distributed Processing: Explorations in the Microstructure of Cognition (MIT Press/Bradford Books, Cambridge 1986)
60.21 G.A. Rummery, M. Niranjan: On-line Q-learning Using Connectionist Systems, Tech. Rep. CUED/F-INFENG/TR 166 (Cambridge Univ. Engineering Department, Cambridge 1994)
60.22 S. Schaal, C. Atkeson: Constructive incremental learning from only local information, Neural Comput. 10, 2047–2084 (1998)
60.23 S. Grossberg: Studies of Mind and Brain (Reidel, Boston 1982)
60.24 J.G. Taylor: The Race for Consciousness (MIT Press, Cambridge 1999)
60.25 N. Kasabov: Evolving fuzzy neural networks – algorithms, applications and biological motivation. In: Methodologies for the Conception, Design and Application of Soft Computing, ed. by T. Yamakawa, G. Matsumoto (World Scientific, 1998) pp. 271–274
60.26 B. Fritzke: A growing neural gas network learns topologies, Adv. Neural Inf. Process. Syst. 7, 625–632 (1995)
60.27 J. Platt: A resource-allocating network for function interpolation, Neural Comput. 3, 213–225 (1991)
60.28 J. Freeman, D. Saad: On-line learning in radial basis function networks, Neural Comput. 9(7) (1997)
60.29 T. Poggio: Regularization theory, radial basis functions and networks. In: From Statistics to Neural Networks: Theory and Pattern Recognition Applications, NATO ASI Series, Vol. 136 (1994) pp. 83–104
60.30 N. Kasabov: Evolving Connectionist Systems: The Knowledge Engineering Approach (Springer, London 2007)
60.31 W. Maass, C.M. Bishop (Eds.): Pulsed Neural Networks (MIT Press, Cambridge 1999)
60.32 N. Kasabov, Q. Song: DENFIS: Dynamic evolving neural-fuzzy inference system and its application for time-series prediction, IEEE Trans. Fuzzy Syst. 10, 144–154 (2002)
60.33 N. Kasabov: Evolving fuzzy neural networks for supervised/unsupervised online knowledge-based learning, IEEE Trans. Syst. Man Cybern. B 31(6), 902–918 (2001)
60.34 T. Yamakawa, H. Kusanagi, E. Uchino, T. Miki: A new effective algorithm for neo fuzzy neuron model, Proc. Fifth IFSA World Congr. (IFSA, Seoul 1993) pp. 1017–1020
60.35 L.A. Zadeh: Fuzzy sets, Inf. Control 8, 338–353 (1965)
60.36 Q. Song, N. Kasabov: TWNFI – transductive neural-fuzzy inference system with weighted data normalization and its application in medicine, IEEE Trans. Fuzzy Syst. (2005)
60.37 W. Gerstner, W.M. Kistler: Spiking Neuron Models (Cambridge Univ. Press, Cambridge 2002)
60.38 A. Destexhe: Spike-and-wave oscillations based on the properties of GABAB receptors, J. Neurosci. 18, 9099–9111 (1998)
60.39 S. Wysoski, L. Benuskova, N. Kasabov: On-line learning with structural adaptation in a network of spiking neurons for visual pattern recognition, Proc. ICANN 2006, LNCS 4131, 61–70 (2006)
60.40 C. Brown, M. Shreiber, B. Chapman, G. Jacobs: Information science and bioinformatics. In: Future Directions of Intelligent Systems and Information Sciences, ed. by N. Kasabov (Physica, Heidelberg 2000) pp. 251–287
60.41 D.S. Dimitrov, I. Sidorov, N. Kasabov: Computational biology. In: Handbook of Theoretical and Computational Nanotechnology, Vol. 1, ed. by M. Rieth, W. Sommers (American Scientific, 2004), Chap. 21
60.42 Z. Chan, N. Kasabov, L. Collins: A two-stage methodology for gene regulatory network extraction from time-course gene expression data, Expert Syst. Appl. 30(1), 59–63 (2006)
60.43 H. Chin, S. Moldin (Eds.): Methods in Genomic Neuroscience (CRC Press, 2001)
60.44 N. Kasabov, S.H. Chan, V. Jain, I. Sidirov, S.D. Dimitrov: Gene regulatory network discovery from time-series gene expression data – a computational intelligence approach, LNCS 3316, 1344–1353 (2004)
60.45 http://alleninstitute.org
60.46 http://www.kedri.info
60.47 W. Freeman: Neurodynamics (Springer, London 2000)
60.48 R.P. Feynman, R.B. Leighton, M. Sands: The Feynman Lectures on Physics (Addison-Wesley, Massachusetts 1965)
60.49 T. Hey: Quantum computing: An introduction, Comput. Control Eng. J. 10(3), 105–112 (1999)
60.50 T. Hogg, D. Portnov: Quantum optimization, Inf. Sci. 128, 181–197 (2000)
60.51 J.-S. Jang, K.-H. Han, J.-H. Kim: Quantum-inspired evolutionary algorithm-based face verification, LNCS, 2147–2156 (2003)
60.52 S.C. Kak: Quantum Neural Computation, Research Report (Louisiana State Univ., Baton Rouge)
60.53 M. Brooks: Quantum Computing and Communications (Springer, Berlin, Heidelberg 1999)
60.54 L.K. Grover: A fast quantum mechanical algorithm for database search, Proc. 28th Ann. ACM Symp. Theory Comput. (STOC '96) (ACM, New York 1996) pp. 212–219
60.55 K.-H. Han, J.-H. Kim: Quantum-inspired evolutionary algorithm for a class of combinatorial optimization, IEEE Trans. Evol. Comput. 6(6), 580–593 (2002)
60.56 G.E. Hinton: Connectionist learning procedures, Artif. Intell. 40, 185–234 (1989)
60.57 M.A. Perkowski: Multiple-valued quantum circuits and research challenges for logic design and computational intelligence communities, IEEE Comput. Intell. Soc. Mag. (2005)
60.58 P.W. Shor: Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer, SIAM J. Comput. 26, 1484–1509 (1997)
60.59 L. Spector: Automatic Quantum Computer Programming: A Genetic Programming Approach (Kluwer Academic, 2004)
60.60 J. Liu, W. Xu, J. Sun: Quantum-behaved particle swarm optimization with mutation operator, Proc. 17th IEEE Int. Conf. Tools Artif. Intell. (ICTAI'05) (2005)
60.61 C.A. Trugenberger: Quantum pattern recognition, Quantum Inf. Process. 1, 471–493 (2002)
60.62 X.-Y. Tsai, H.-C. Huang, S.-J. Chuang: Quantum NN vs. NN in signal recognition, Proc. Third Int. Conf. Inf. Technol. Appl. (ICITA'05), Vol. 2 (IEEE Computer Society, Washington, DC 2005) pp. 308–312
60.63 G.K. Venayagamoorthy, S. Gaurav: Quantum-inspired evolutionary algorithms and binary particle swarm optimization for training MLP and SRN neural networks, J. Theor. Comput. Nanosci. (2006)
60.64 N. Kouda, N. Matsui, H. Nishimura, F. Peper: Qu-bit neural network and its learning efficiency, Neural Comput. Appl. 14, 114–121 (2005)
60.65 D. Ventura, T. Martinez: Quantum associative memory, Inf. Sci. 124, 273–296 (2000)
60.66 D. Ventura: Implementing competitive learning in a quantum system, Proc. Int. Jt. Conf. Neural Netw. (IEEE 1999)
60.67 G. Xie, Z. Zhuang: A quantum competitive learning algorithm, Liangzi Dianzi Xuebao/Chin. J. Quantum Electron. 20, 42–46 (2003)
60.68 C.P. Williams, S.H. Clearwater: Explorations in Quantum Computing (Springer, Berlin, Heidelberg 1998)
60.69 S. Grossberg: On learning and energy–entropy dependence in recurrent and nonrecurrent signed networks, J. Stat. Phys. 1, 319–350 (1969)
60.70 S. Haykin: Neural Networks – A Comprehensive Foundation (Prentice Hall, Englewood Cliffs, NJ 1994)
60.71 D. Hebb: The Organization of Behavior (John Wiley, New York 1949)
60.72 T.M. Heskes, B. Kappen: On-line learning processes in artificial neural networks. In: Mathematical Foundations of Neural Networks (Elsevier, Amsterdam 1993) pp. 199–233
60.73 J. Moody, C. Darken: Fast learning in networks of locally-tuned processing units, Neural Comput. 1, 281–294 (1989)
60.74 K. Pribram: Rethinking neural networks: Quantum fields and biological data, Proc. First Appalach. Conf. Behav. Neurodyn. (Lawrence Erlbaum, Hillsdale, NJ 1993)
60.75 G. Resconi, A.J. van der Wal: A data model for the morphogenetic neuron, Int. J. Gen. Syst. 29(1), 141–149 (2000)
60.76 G. Resconi, G.J. Klir, E. Pessa: Conceptual foundations of quantum mechanics: The role of evidence theory, quantum sets and modal logic, Int. J. Mod. Phys. C 10(1), 29–62 (1999)
60.77 V. Vapnik: Statistical Learning Theory (Wiley, New York 1998)

Acknowledgements. The work presented in this chapter was partially supported by grant AUTX0201, funded by the Foundation for Research, Science and Technology of New Zealand, and by the Knowledge Engineering and Discovery Research Institute (KEDRI, http://www.kedri.info), Auckland University of Technology.