
A Biologically New Learning Paradigm Implemented In An Artificial Neural Network





Summary
Shahaf and Marom (2001) have provided evidence that a biological neural network of cultured cortical neurons from newborn rats is able to learn. These biological networks have a multitude of connections, their connections are stable, and they are modifiable by external stimulus; all of this, however, had been shown before. What Shahaf and Marom have done that is groundbreaking is provide evidence that learning can be achieved in the absence of a separate neural reward mechanism. By stimulating the network until a desired response is achieved and then removing the stimulation, the network is able to locate and stabilise upon selected neurons, using the point at which the external stimulation is removed as a guide to which neuron is the correct one.
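The stimulate-until-response protocol can be sketched as a simple driving loop. The sketch below is a minimal toy illustration, not the project's actual code: `ToyNetwork`, `stimulate`, and `responded` are hypothetical names, and the pathway dynamics are deliberately trivial.

```cpp
#include <cassert>

// Minimal stand-in for the recurrent network: the neuron of interest
// becomes responsive once repeated stimulation has sufficiently
// potentiated a pathway leading to it (hypothetical toy dynamics).
struct ToyNetwork {
    int pathwayStrength = 0;              // grows with each stimulation cycle
    static constexpr int threshold = 5;   // illustrative response threshold

    void stimulate() { ++pathwayStrength; }            // apply external input
    bool responded() const { return pathwayStrength >= threshold; }
};

// Drive the network until the desired response appears, then stop.
// Removing the stimulus at that moment is the only "reward" signal.
// Returns the number of stimulation cycles used, or -1 on timeout.
int trainUntilResponse(ToyNetwork& net, int maxCycles) {
    for (int cycle = 1; cycle <= maxCycles; ++cycle) {
        net.stimulate();
        if (net.responded())
            return cycle;   // desired response reached: withdraw stimulation
    }
    return -1;              // desired response never appeared
}
```

The key design point is that the loop carries no explicit reward term; the cessation of stimulation itself marks the correct behaviour.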
This project seeks to reproduce these findings in an artificial environment. A recurrent neural network is chosen, consisting of stochastic neurons each connected to an appropriate number of other neurons. Long-term potentiation and long-term depression are identified as biologically observed mechanisms responsible for learning in the hippocampus and cortex, and an artificial version of each is implemented within the network. The implementation is written in C++.
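The two mechanisms can be illustrated in a few lines of C++. This is a sketch under assumed parameter values, not the project's tuned update rule: the sigmoid activation, the function names, and the learning rates are all illustrative choices.

```cpp
#include <cassert>
#include <cmath>

// Firing probability of a stochastic neuron as a sigmoid of its
// summed weighted input (an illustrative activation choice).
double fireProbability(double netInput) {
    return 1.0 / (1.0 + std::exp(-netInput));
}

// Hebbian-style synaptic update: long-term potentiation (LTP) when
// pre- and post-synaptic neurons fire together, long-term depression
// (LTD) when the presynaptic neuron fires alone. Rates are illustrative.
double updateWeight(double w, bool preFired, bool postFired) {
    const double ltpRate = 0.10;
    const double ltdRate = 0.05;
    if (preFired && postFired) return w + ltpRate;  // LTP: strengthen
    if (preFired)              return w - ltdRate;  // LTD: weaken
    return w;                                       // no presynaptic spike
}
```

In use, each neuron would fire with probability `fireProbability(netInput)` on every time step, and every synapse would then be passed through `updateWeight` according to the firing pattern of its two endpoints.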
Several protocols are defined that allow various networks to be tested in the same way the biological networks are tested in Shahaf and Marom (2001). Successful learning is found for both simple and selective learning, and pathways are formed from input neurons to neurons of interest. Failure in the control trials, however, leads to the conclusion that the model does not fully reproduce the findings of Shahaf and Marom: it lacks the important reinforcer found in their learning results. Reasons for this discrepancy are discussed and promising avenues for further development are noted.
Despite this failing, the project produces an artificial neural network that has nearly all of the
features required for a fully successful reproduction of the Shahaf and Marom study, including
spontaneous background activity and artificial memory degradation.

Table of Contents
1 Introduction
1.1 Chapter Summary
1.2 Aim of the project
1.3 The fields concerned – Psychologists, Computer Scientists, Philosophers, and many, many more
1.4 What is a Cognitive Scientist doing here?
1.5 What is learning?
1.6 Shahaf and Marom (2001) – overview
1.7 Model criteria
1.8 Biological neural networks and learning – a brief terminological note
1.9 Motivation
1.10 Milestones
2 Background
2.1 Chapter Summary
2.2 Some basics – the neuron and the synapse
2.3 Some basics – the McCulloch and Pitts neuron
2.4 An extension – Stochastic neurons
2.5 What mechanisms are thought to be responsible for learning?
2.6 Hebbian Learning
2.7 Long-term Potentiation – Long-term Depression
2.8 Why Shahaf and Marom is both novel and groundbreaking
3 Preliminary Work
3.1 Chapter Summary
3.2 Evaluation of feedforward networks and backpropagation
3.3 Evaluation of Hopfield networks
3.4 Evaluation of recurrent networks
3.5 A genetic algorithm’s suitability for the model
4 Methodology and Implementation
4.1 Chapter Summary
4.2 Network used for the model
4.3 Update rule
4.4 Long-term potentiation criteria and methods
4.5 Long-term depression criteria and methods
4.6 Spontaneous Background Activity
4.7 Memory Degradation
4.8 Model supervision – simulation protocols
4.8.1 Selection of neuron of interest and neuron for control
4.8.2 Simple Learning
4.8.3 Selective learning
4.8.4 Control Trials
4.9 Bioinspired Evolutionary Agent Simulation Toolkit (BEAST)
4.9.1 Why BEAST is chosen
4.9.2 What BEAST lacks – required additions to code provided
4.10 Visualisation of results
4.11 Evaluation of learning in the model
5 The Results
5.1 Chapter Summary
5.2 Simple Learning Results
5.3 Control Simulation
5.4 Selective Learning
5.5 Other Simulations – Full Connectivity
5.6 Other Simulations – One Neuron Stimulated Instead of Two
5.7 Simulations Involving Increased Neuronal Connections
5.8 A Larger Network – 144 Neurons
5.9 Results of the most biologically plausible model
5.10 Weight Distributions – Before and After Simulations
6 Discussion
6.1 Chapter summary
6.2 How the model produces learning
6.3 Does scaling down the size of a network matter?
6.4 Minimal Model Evaluation
6.5 One neuron instead of two
6.6 Full connectivity
6.7 More biological – increased connections
6.8 Larger network
6.9 More biological – Learning rule dependence
6.10 Comparison of final model to that of Shahaf and Marom
6.11 Can the model work in the future?
References
Appendix A – Project Experience
Appendix B – Project Evaluation and Changes
1. Meeting the minimum aims
2. Bettering the minimum aims
3. Rejection of raster plots
Appendix C – Definitions and abbreviations
Appendix D – How BEAST is adapted for the model
