Talk:Neural network (biology)/Archive 2

From Wikipedia, the free encyclopedia

Neural network

This article is really messy. It is strongly linked with the article Artificial neural network, which covers neural networks from a computer scientist's point of view. It seems to me that either the Neural network article should be limited to the medical aspects of neural networks, leaving the Artificial neural network article to deal with the computing/algorithmic aspects of the concept, or we could use the Neural network article to introduce both Artificial Neural Networks and Biological Neural Networks and leave the details of each of these perspectives to their respective articles, i.e. to Artificial Neural Networks and Biological Neural Networks.

Personally, I would vote for the creation of a short Neural Network article that introduces the concept from both the Artificial Neural Network and Biological Neural Network perspectives, since each of these sub-views of the NN concept seems equally strong: the biology experts attribute as much importance to it as the computing experts.

Strife911 (talk) 13:42, 9 March 2010 (UTC)

I agree also - these three articles are pretty much the same thing with slight variations. NN should in no way have as much detail as it does about ANN. Neural network should cover the cognitive aspects of both ANN and BNN; ANN and BNN should then take their theories and applications and expand upon them. The NN page should only have history, background and a summary of the branches of research and applications, and maybe a little piece saying how "in modern times NN is mostly used to refer to ANN", to correct the misconception that NN IS ANN - and that's all, really.
I have tried to do this in the disambiguation page also - maybe that's the place to start - it took me an hour and a half to get the wording but I think I have it all in there :¬)
Chaosdruid (talk) 03:24, 12 July 2010 (UTC)
This is still confusing, so I started to work on it today. I don't think this article should even exist. There is already an Artificial neural network page, a Biological neural network page, and a page named Biological network. We should delete this page - or move some of the material over to the others - and rename Biological neural network to just Neural network. I'm glad to see that others were as confused as I was. I'll put in a resolution to remove this page. Thompsma (talk) 18:54, 10 June 2011 (UTC)

Delete

This page is terrible - I'm just going to delete it and suggest that people migrate to Biological network, Artificial neural network, and Neural network. It can't be fixed - it would take too much work to move the material elsewhere, and most of it isn't written well. The citations may be useful, but not worth the effort to rescue. Not much activity in here anyway, so I will just go ahead and do this. It needs to be done. Thompsma (talk) 19:01, 10 June 2011 (UTC)

You can't just delete an article. Perhaps you mean a merge proposal, in which much of the text can be left behind? Dicklyon (talk) 19:05, 10 June 2011 (UTC)
I nominated it for speedy deletion per Wikipedia:Criteria for speedy deletion. Thompsma (talk) 19:08, 10 June 2011 (UTC)

Contested deletion

This page should not be speedy deleted because it is sourced and informative. -- cheers, Michael C. Price talk 19:24, 10 June 2011 (UTC)

I read through the text and it is an incoherent list of factoids with no synthesis. It really isn't worth the effort to keep this page and it shouldn't have been created in the first place. Hence, a speedy deletion is in order to fix this mess. It is no wonder people were confused. A bold move is needed. Get rid of it. Thompsma (talk) 19:27, 10 June 2011 (UTC)
Biological neural network has no citations. Some of the citations could be grabbed from here, but I don't think it is worth the effort. Time would be better spent citing the main article Biological neural network - where I elected to have the article moved to Neural network. Thompsma (talk) 19:31, 10 June 2011 (UTC)


Either merge and redirect to Artificial neural network, or keep as an overview. The topic is notable, and there are many, many sources on it. The only issue appears to be whether it is distinct enough from Artificial neural network to deserve its own article. --Ronz (talk) 19:36, 10 June 2011 (UTC)

I basically agree with Ronz. There's no need for CSD, or for deletion at all. Merging is the right way to go. Please see Help:Merge if you need information on how that works. If there is content that you feel should just be dumped and redone from scratch, the way to do that is to delete the content within the page, rather than delete the page itself. Page deletion is really for those circumstances where there is no intent to re-create a page with the same title any time soon. --Tryptofish (talk) 19:53, 10 June 2011 (UTC)

I also started a thread at biological neural networks on this topic. There is confusion in the literature on this as well. A decision will have to be reached on how to properly name these pages. The following terms have been used in the literature: neural network models, artificial neural network, neural network, biological neural network, and neural groupings (outdated, but equated with neural network). Neural network can mean either biological neural network, neural network model, or artificial neural network. This page does not help to resolve this confusion; its creation stems from this ambiguity and synonymous usage in the literature. I agree with the merger idea if there is consensus on it. At most this page should be a disambiguation page pointing to biological neural network and artificial neural network (which is a neural network model). Thompsma (talk) 21:02, 10 June 2011 (UTC)

Merger proposal - sorta

Discussion continued from Talk:Biological neural network#Proposal to rename and restart, which please see.

I propose that the body text of Neural network be merged into Artificial neural network and Biological neural network. The content in Artificial neural network and Biological neural network covers the material. The lead can remain to explain that neural network applies to both. Artificial neural network is pretty much synonymous with neural network model - so that would suffice. Thompsma (talk) 22:35, 10 June 2011 (UTC)

The neural network article has 21 inline citations. I'd want to be sure that the sourced material there survives; if it's not well written, it can of course be improved in the merge, but it shouldn't be dropped just because it's badly written or not well organized. Dicklyon (talk) 01:28, 11 June 2011 (UTC)
Agreed... unless anyone objects. I can work on it over the next little while. I recently finished working on ecology and started working on Food web, Biological network and Ecological network, which brought me to this issue. Thompsma (talk) 02:12, 11 June 2011 (UTC)
This discussion was mentioned at talk:WikiProject Neuroscience. Though artificial neural networks are patterned on biological neural networks, I suspect a few more readers will be looking for the artificial topic, rather than the biological. Neural network has been viewed 13,452 times so far this month, Artificial neural network 10,219 times, and Biological neural network 920 times.
Although this isn't infallible, it indicates to me that "artificial" is the primary topic for the purposes of Wikipedia:Disambiguation. I'm in favour of merging worthwhile content from Neural network into Biological neural network and Artificial neural network, redirecting Neural network to Artificial neural network and putting a prominent hatnote atop each article pointing the reader to the other, per Wikipedia:Disambiguation. Neural network (disambiguation) seems redundant to me; there are only two topics listed there that are commonly referred to as "neural network." --Anthonyhcole (talk) 11:08, 11 June 2011 (UTC)
To people in the field, the phrase "neural networks" carries an implication of either computational or theoretical work. I don't think the focus is strongly enough on artificial networks to justify a simple redirect, but it wouldn't be horrible to do it that way. For example, in the latest issue of Neural Networks (journal), there are five papers on artificial networks and two papers on theoretical analysis of biological networks -- I think those proportions are roughly representative. Looie496 (talk) 16:17, 11 June 2011 (UTC)
What would you prefer to see instead of a redirect, Looie? --Anthonyhcole (talk) 18:08, 11 June 2011 (UTC)

I've really been scratching my head about this one. To me, "neural network" primarily refers to theoretical (as well as some anatomical and neurophysiological) work on biological networks, with artificial networks (as model systems that usually are, deliberately, more reductionist than actual biological systems) a more specialized subtopic of the field. I don't think anyone is likely to think of the phrase "biological neural network" as a search term. People who want to read about such a thing would simply look for "neural network", taking it for granted that it will be biological. So I end up having low enthusiasm for having a page titled "Biological neural network", preferring instead that it be a redirect. I haven't looked critically at the various page contents, but I would tend to agree with Dicklyon that I don't want to see much deleted outright, at least not right away.

So, I'm leaning towards keeping Neural network as an actual article, rather than as a disambiguation page. Biological neural network ought to become a redirect to Neural network. Neural network should be the primary article on the subject, and should include the kind of content associated with Biological neural network in depth, and it should also address artificial networks in summary style. Then Artificial neural network would be the content fork that would address that subject in greater detail. My 2 cents. --Tryptofish (talk) 19:19, 11 June 2011 (UTC)

Another factor to consider is that artificial neural networks may themselves be biological; the current naming scheme implies a dichotomy that may not actually exist. Powers T 20:20, 11 June 2011 (UTC)
That is kind of true, and is a big part of the reason I've been scratching my head about this. --Tryptofish (talk) 20:25, 11 June 2011 (UTC)

As theoretical and practical research and application of artificial neural nets seems to be primary per results indicated above (biological having less than 1/10 the hits of the others), I don't see why you would merge the biological and bioresearch aspects into this article. It would be better to redirect this article to artificial neural nets and then put a hatnote on that article for biological ones, instead of making a non-primary topic primary. 65.94.47.63 (talk) 05:16, 12 June 2011 (UTC)

I think that what the hits show is merely that "biological neural network" is not something that most readers would look for, which is what I said. They don't tell us anything about what the reliable (read: scholarly) sources say about it. For that, we have to look at those sources. And they mostly discuss biological rather than artificial systems (with the caveat that Powers pointed out), simply calling those biological systems "neural networks", and (I think) making them the primary topic. --Tryptofish (talk) 17:59, 12 June 2011 (UTC)
I don't agree with the conclusion reached on number of hits, but I completely agree with your suggestion to: 1) keep Neural network, 2) Biological neural network redirect to Neural network, 3) Neural network should be the primary article including in depth content on Biological neural network and content on artificial networks in summary style, 4) Artificial neural network would be the content fork that would address that subject in greater detail.
There are many reasons why one article may be hit over another. Embedded links to that article could be one reason. Many of the biological articles that would link to neural networks are inconsistent in quality, whereas the computer-oriented direction seems to have more fully completed or linked articles. As a biologist I find it difficult to conceptualize how you can have neural - of or pertaining to the nervous system - not immediately linked (conceptually or physically) to the topic of biology. I understand how neural network is more popularly used in the primary literature in reference to computing or models. I like [[User:LtPowers|Powers]]'s comment that both may be biological - certainly they are natural phenomena, and of course one is the extension of the human mind - a biologically evolved organ with pre- and post-synaptic nerve fibers that network learning, memory, and emotion into the dentate gyrus in the hippocampus. However, artificial networks do not run through nervous tissues, though the rules that govern them may be held in common, which is what inspires network thinking, modelling, and computing. Thompsma (talk) 06:40, 13 June 2011 (UTC)
I'm persuaded by the above two posts, and agree that Neural network should be largely about biological neural networks, with a hatnote directing interested readers to Artificial neural network. I still favour deleting the dab page, or redirecting it to the primary topic article, Neural network. --Anthonyhcole (talk) 07:04, 13 June 2011 (UTC)
Is everyone in agreement? Thompsma (talk) 00:03, 14 June 2011 (UTC)
If I understand the plan correctly, then no. I think neural network should either be a disambiguation page or an expanded page that describes both biological neural networks and artificial neural networks. –CWenger (^@) 00:18, 14 June 2011 (UTC)
It sounds as if you agree, CWenger, because what you describe - i.e., "an expanded page that describes both biological neural networks and artificial neural networks" - is essentially what we are voting for. Neural network would describe both: biological neural networks in expanded detail, and artificial neural networks in summary style with a content fork. Thompsma (talk) 01:18, 14 June 2011 (UTC)
OK, just making sure. One reader mentioned a hatnote for artificial neural network, which implied to me that it would not get equal coverage, which is what I would support on the expanded disambiguation page. –CWenger (^@) 01:28, 14 June 2011 (UTC)

neural net redirects here, should that be retargeted to artificial ones? There is almost nothing biological about that (Google Scholar). 65.94.47.63 (talk) 09:05, 14 June 2011 (UTC)

Except: [1] Thompsma (talk) 18:40, 14 June 2011 (UTC)
Yeah, I think it should stay as is. It is counterintuitive for a shortened name to redirect to somewhere different. –CWenger (^@) 18:45, 14 June 2011 (UTC)
That's 900 from a total of 23000 Scholar hits. Compare with "computer" [2], for which there are 4200 hits, [3] another 2800 for "computing", [4] and 2700 for "artificial intelligence". It seems to me, at least, that the primary use of "neural net" is for artificial ones; a hatnote at the artificial article can point to the general article and the biological one. 65.94.47.63 (talk) 04:12, 15 June 2011 (UTC)

neural network types

just fyi, no judgment on my part, but I work in dynamical network modeling using physiological neurons (I wrote the Morris-Lecar model article) so I thought I'd offer my perspective.

Artificial neural networks use a very basic modeling structure that can be likened to graph theory in mathematics. You simply have edges and vertices (neurons and their "connections"), and neurons in ANNs generally are either 1 or 0 (they fire or they don't). At one time, they were assumed to be very similar to real neural networks, but real neural networks are much more complex. Some people use them in psychology (see Mark Gluck's youtube presentation on "cognitive and computational neuroscience") for topology permutations to assess the global behavior of a more generalized network, but the "neurons" tend to represent conceptual functional components of the "mind" rather than actual neurons. I have never used an ANN, but sometimes when I talk to people about my research, they mistake it for ANN, so I've learned a bit about them.
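For readers unfamiliar with the graph-theory framing above, here is a minimal illustrative sketch of a McCulloch-Pitts style binary unit in Python. This is not taken from any of the articles under discussion; the weights and threshold are arbitrary assumed values that happen to implement a logical AND gate.

```python
# A classic ANN unit in the graph-theoretic sense described above:
# vertices are neurons, weighted edges are their "connections", and
# each neuron's output is binary - it fires (1) or it doesn't (0).

def binary_neuron(inputs, weights, threshold):
    """Fire (output 1) iff the weighted input sum reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Assumed illustrative parameters: with unit weights and a threshold
# of 2, the two-input unit computes logical AND.
weights, threshold = [1.0, 1.0], 2.0
table = {(a, b): binary_neuron([a, b], weights, threshold)
         for a in (0, 1) for b in (0, 1)}
print(table)  # only the input (1, 1) makes the neuron fire
```

A full ANN is just many such units wired into a weighted graph, which is why the description above likens it to edges and vertices.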

Biological neural network models are still technically artificial, but they're differentiated from ANN's because of their similarity to real neurons. The actual physical and physiological interactions of neurons are what's being modeled. Thus, the Hodgkin-Huxley model and Morris-Lecar model can be used as the unit of a network, replacing a 1 or 0 state with a more complex, continuous, n-dimensional state that describes both the membrane potential and the ion channel states in the neuron's membrane.

Biological neural networks can go further and include dendrite branches, electrical and chemical coupling (both inhibitory and excitatory) using differential equations that describe the evolution of a system based on initial conditions. Such systems can also be dissipative, driven, and nonholonomic using empirical observations of physiological processes.

The Biological neuron model wiki tells the story with ANNs as the first BNNs, which was my understanding as well (but that's not cited in the article). Then came the leaky integrate-and-fire model, and eventually models like Hodgkin-Huxley and more dynamics-focused models like the FitzHugh-Nagumo model.
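To make the contrast concrete, here is a minimal sketch of the leaky integrate-and-fire model mentioned above: unlike a 1-or-0 ANN unit, the state is a continuously evolving membrane potential, Euler-integrated over time, with a spike-and-reset rule on threshold crossing. All parameter values here are arbitrary illustrative choices, not drawn from any cited model.

```python
# Leaky integrate-and-fire: Euler integration of
#   dv/dt = (-(v - v_rest) + i_ext) / tau
# with a spike emitted and v reset whenever v crosses v_thresh.

def simulate_lif(i_ext=1.5, v_rest=0.0, v_thresh=1.0, v_reset=0.0,
                 tau=10.0, dt=0.1, steps=1000):
    """Return (spike count, membrane potential trace) for constant drive."""
    v = v_rest
    spikes = 0
    trace = []
    for _ in range(steps):
        v += dt * (-(v - v_rest) + i_ext) / tau  # leak plus external drive
        if v >= v_thresh:                        # threshold crossing
            spikes += 1
            v = v_reset                          # spike, then reset
        trace.append(v)
    return spikes, trace

spikes, trace = simulate_lif()
print(spikes)  # constant supra-threshold drive yields a regular spike train
```

Replacing the binary state of an ANN neuron with this continuous one - or with a Hodgkin-Huxley or Morris-Lecar unit, whose state additionally tracks ion-channel variables - is the step from ANN-style units toward biologically grounded network models described in this thread.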

Xurtio (talk) 01:24, 20 June 2011 (UTC)

Nicely said and thanks for your help Xurtio! A couple of questions to help me understand. In biology I think we can safely begin with Santiago Ramón y Cajal as the origin of the modern neuron doctrine, showing that nervous tissue is not a continuous web but a network (see [5] and [6]). This puts us back to some time between 1888 and 1898, when the theory was being developed. I'm trying to understand your sentence: "The Biological neuron model wiki tells the story with ANNs as the first BNNs" - are you saying that the ANN precedes the BNN in its historical context? Do you have a citation for this and can you elaborate? Cajal didn't have computer models, but used arrows to depict his theory that nervous currents along axons had direction and network structure. He based this theory on histological analysis of nervous tissues, and the experimental work explaining gated channels did not come until the 1920s. Do we consider Cajal's work "still technically artificial"? Thompsma (talk) 06:33, 20 June 2011 (UTC)
Just an observation based on Xurtio's very insightful comment: this would seem to support the strategy of making the main article be primarily about the biological networks, and treating the ANN page as the spinoff, as opposed to doing it the other way around. --Tryptofish (talk) 18:22, 20 June 2011 (UTC)
I agree Tryptofish and it is safe to say there is a large consensus to do so. At some point in the near future I will get around to this - completing a couple of other projects first. Thompsma (talk) 20:28, 20 June 2011 (UTC)
Can I just say I approve heartily of the resolution to this thread: "This article focuses on the relationship between the two concepts; for detailed coverage of the two different concepts refer to the separate articles: biological neural network and artificial neural network." --Anthonyhcole (talk) 05:22, 5 September 2011 (UTC)

It appears to me that a third type of neural network is missing from this debate, and from any Wikipedia page I've seen - Hardware Neural Networks. The artificial networks are described as being models (simulated on standard computer hardware), whilst biological networks are made of biological matter. However, a Google search for "hardware neural network" turns up a large amount of literature on attempts to build these networks in non-biological hardware (e.g. non-standard silicon chips) which are distinct from the abstract models covered in the article on artificial networks. Nossac (talk) 20:44, 7 August 2011 (UTC)

I think most people would count those as artificial neural networks, but I agree with you that the article ought to specifically discuss them at least briefly. Looie496 (talk) 15:40, 5 September 2011 (UTC)

look at this by goals, not methods

We study biological systems with individual neuron models (as HH, Morris-Lecar, etc) that can be connected into time-dependent network models. We can also study those models in frequency space which after reduction turns into something functionally identical to an ANN. The sigmoidal "hidden layer" of the standard ANN was structurally inspired by what was going on in biological response functions. Of course, we also can make analog hardware networks, and those similarly can be used either for general PDP algorithms or biological analogy. Finally we can make neuron-slurry drive a simple flight simulator or keep a robot from crashing into walls, which could make the next Roomba rat-brain powered.
The point is that in this young field, there's a lot of crossover of methodology and application. I see no problem, necessarily, in repeating information between two separate but related articles (as long as each links to a combined main article), but the intended division of scope between the ANN, BNN, and NN articles has not yet been well defined, IMHO. I think when we talk about NNs, we talk about PDP in general; thus the distinction between biology and engineering is really only in the background, goals, and funding agency of the R&D behind it. SamuelRiv (talk) 04:25, 10 September 2011 (UTC)

So you seriously want to leave your discussion of neural networks with technology 20 years behind the times? I don't get it. Not even one mention of Edelman. Nothing on multidimensional perceptrons. You show one picture of something which comes from the sixties. I thought this was a source for knowledge, not arrogant ham-fisting and slap-downs. Do any of you have working careers in this field, or are you all just hacks? Ggiavelli (talk) 02:18, 18 August 2012 (UTC)

Wikipedia articles are written by volunteers, not by professionals who are paid to do it. In other words, if our articles are missing information, they depend on people like you to fix the problem. It's certainly useful to point out a problem, and thank you for doing so; but there is no way of forcing anybody to step up and fix it. Regards, Looie496 (talk) 02:42, 18 August 2012 (UTC)

It is not appropriate of you to strike a comment and to state that it must be DISCUSSED IN TALK FIRST, because that is NOT how a wiki works. NO user in their right mind would expect that is how a wiki works. It's arrogant. It makes me wonder what slimy, skeevy types are running the place, to have so much hubris as to make such a ridiculous statement. Ggiavelli (talk) 02:46, 18 August 2012 (UTC)

I'm always sorry when a new editor feels put off by their experience here, and it most certainly is not anyone's agenda to drive new participants away. However, although adding balanced information that brings an article up to date is welcome, simply adding a paragraph touting one's own work is a very different thing. I think that many people can understand that concept, accept it, and learn from it, without taking it personally and becoming angry over it. --Tryptofish (talk) 14:11, 18 August 2012 (UTC)

Further reading

This section has grown out of control, so I have moved it out of the article into this talk page. Please see Wikipedia:Further reading and put only entries that are topical, reliable and balanced, and please, keep the section limited in size. "Wikipedia is not a catalogue of all existing works." Please, if you add an entry back into the article, motivate why. Thank you! Lova Falk talk 09:20, 16 November 2012 (UTC)

Yes, good move. A lot of these are outdated now, Cooper & Elbaum or Fukushima, etc. But so is the article... History2007 (talk) 23:38, 19 March 2013 (UTC)
  • Arbib, Michael A. (Ed.) (1995). The Handbook of Brain Theory and Neural Networks.
  • Alspector, U.S. patent 4,874,963 "Neuromorphic learning networks". October 17, 1989.
  • Agre, Philip E. (1997). Computation and Human Experience. Cambridge University Press. ISBN 0-521-38603-9. p. 80.
  • Yaneer Bar-Yam (2003). Dynamics of Complex Systems, Chapter 2 (PDF).
  • Yaneer Bar-Yam (2003). Dynamics of Complex Systems, Chapter 3 (PDF).
  • Yaneer Bar-Yam (2005). Making Things Work. See chapter 3.
  • Bertsekas, Dimitri P. (1999). Nonlinear Programming. ISBN 1-886529-00-0.
  • Bertsekas, Dimitri P. & Tsitsiklis, John N. (1996). Neuro-dynamic Programming. ISBN 1-886529-10-8.
  • Bhadeshia, H. K. D. H. (1999). "Neural Networks in Materials Science" (PDF). ISIJ International. 39 (10): 966–979. doi:10.2355/isijinternational.39.966.
  • Boyd, Stephen & Vandenberghe, Lieven (2004). Convex Optimization.
  • Dewdney, A. K. (1997). Yes, We Have No Neutrons: An Eye-Opening Tour through the Twists and Turns of Bad Science. Wiley, 192 pp. ISBN 0-471-10806-5. See chapter 5.
  • Egmont-Petersen, M., de Ridder, D. & Handels, H. (2002). "Image processing with neural networks - a review". Pattern Recognition. 35 (10): 2279–2301. doi:10.1016/S0031-3203(01)00178-9.
  • Fukushima, K. (1975). "Cognitron: A Self-Organizing Multilayered Neural Network". Biological Cybernetics. 20 (3–4): 121–136. doi:10.1007/BF00342633. PMID 1203338.
  • Frank, Michael J. (2005). "Dynamic Dopamine Modulation in the Basal Ganglia: A Neurocomputational Account of Cognitive Deficits in Medicated and Non-medicated Parkinsonism". Journal of Cognitive Neuroscience. 17 (1): 51–72. doi:10.1162/0898929052880093. PMID 15701239.
  • Gardner, E. J. & Derrida, B. (1988). "Optimal storage properties of neural network models". Journal of Physics A. 21: 271–284. doi:10.1088/0305-4470/21/1/031.
  • Hadzibeganovic, Tarik & Cannas, Sergio A. (2009). "A Tsallis' statistics based neural network model for novel word learning". Physica A: Statistical Mechanics and its Applications. 388 (5): 732–746. doi:10.1016/j.physa.2008.10.042.
  • Krauth, W. & Mezard, M. (1989). "Storage capacity of memory with binary couplings". Journal de Physique. 50 (20): 3057–3066. doi:10.1051/jphys:0198900500200305700.
  • Maass, W. & Markram, H. (2004). "On the computational power of recurrent circuits of spiking neurons". Journal of Computer and System Sciences. 69 (4): 593–616. doi:10.1016/j.jcss.2004.04.001.
  • MacKay, David (2003). Information Theory, Inference, and Learning Algorithms.
  • Mandic, D. & Chambers, J. (2001). Recurrent Neural Networks for Prediction: Architectures, Learning Algorithms and Stability. Wiley. ISBN 0-471-49517-4.
  • Minsky, M. & Papert, S. (1969). Perceptrons: An Introduction to Computational Geometry. MIT Press. ISBN 0-262-63022-2.
  • Muller, P. & Insua, D. R. (1998). "Issues in Bayesian Analysis of Neural Network Models". Neural Computation. 10 (3): 571–592. PMID 9527841.
  • Reilly, D. L., Cooper, L. N. & Elbaum, C. (1982). "A Neural Model for Category Learning". Biological Cybernetics. 45: 35–41. doi:10.1007/BF00387211.
  • Rojas, R. (1996). Neural Networks: A Systematic Introduction. Springer-Verlag.
  • Rosenblatt, F. (1962). Principles of Neurodynamics. Spartan Books.
  • Sun, R. & Bookman, L. (eds.) (1994). Computational Architectures Integrating Neural and Symbolic Processes. Kluwer Academic Publishers, Needham, MA.
  • Sutton, Richard S. & Barto, Andrew G. (1998). Reinforcement Learning: An Introduction.
  • Van den Bergh, F. & Engelbrecht, A. P. (2000). "Cooperative Learning in Neural Networks using Particle Swarm Optimizers". CIRG 2000.
  • Wilkes, A. L. & Wade, N. J. (1997). "Bain on Neural Networks". Brain and Cognition. 33 (3): 295–305. doi:10.1006/brcg.1997.0869. PMID 9126397.
  • Wasserman, P. D. (1989). Neural Computing Theory and Practice. Van Nostrand Reinhold. ISBN 0-442-20743-3.
  • Spooner, Jeffrey T., Maggiore, Manfredi, Ordonez, Raul & Passino, Kevin M. (2002). Stable Adaptive Control and Estimation for Nonlinear Systems: Neural and Fuzzy Approximator Techniques. John Wiley and Sons, NY.
  • Dayan, Peter & Abbott, L. F. Theoretical Neuroscience. MIT Press. ISBN 0-262-04199-5.
  • Gerstner, Wulfram & Kistler, Werner. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press. ISBN 0-521-89079-9.
  • Steeb, W.-H. (2008). The Nonlinear Workbook: Chaos, Fractals, Neural Networks, Genetic Algorithms, Gene Expression Programming, Support Vector Machine, Wavelets, Hidden Markov Models, Fuzzy Logic with C++, Java and SymbolicC++ Programs, 4th edition. World Scientific Publishing. ISBN 981-281-852-9.

Recent Successes

The article says: A. K. Dewdney, a former Scientific American columnist, wrote in 1997, "Although neural nets do solve a few toy problems, their powers of computation are so limited that I am surprised anyone takes them seriously as a general problem-solving tool." (Dewdney, p. 82)

This criticism should be contrasted with the incredible recent successes of neural networks. Since 2009, they have won many competitions and outperformed all other machine learning techniques in many applications. Deeper Learning (talk) 15:50, 10 December 2012 (UTC)

Ok, I inserted material that shows that Dewdney's criticism is outdated: In recent years, neural networks have left behind their former reputation as toy problem methods. In particular, between 2009 and 2012, the recurrent neural networks and deep feedforward neural networks developed in the research group of Jürgen Schmidhuber at the Swiss AI Lab IDSIA have won eight international competitions in pattern recognition and machine learning[1]. For example, Alex Graves et al.'s multi-dimensional Long short term memory (LSTM)[2][3] won three competitions in connected handwriting recognition at the 2009 International Conference on Document Analysis and Recognition (ICDAR), without any prior knowledge about the three different languages to be learned. Today, variants of the back-propagation algorithm as well as unsupervised methods by Geoff Hinton and colleagues at the University of Toronto[4][5] can be used to train deep, highly nonlinear neural architectures similar to the 1980 Neocognitron by Kunihiko Fukushima[6] and the "standard architecture of vision"[7], inspired by the simple and complex cells identified by Nobel laureates David H. Hubel & Torsten Wiesel in the visual primary cortex. As of 2012, the state of the art in deep learning feedforward networks alternates convolutional layers and max-pooling layers, topped by several pure classification layers. Since 2011, fast GPU-based implementations of this approach by Dan Ciresan and colleagues at IDSIA have won several pattern recognition contests, including the IJCNN 2011 Traffic Sign Recognition Competition[8], the ISBI 2012 Segmentation of Neuronal Structures in Electron Microscopy Stacks challenge[9], and others. Such neural networks also were the first artificial pattern recognizers to achieve human-competitive or even superhuman performance[10] on important benchmarks such as traffic sign recognition (IJCNN 2012), or the famous MNIST handwritten digits problem of Yann LeCun and colleagues at NYU. 
Deeper Learning (talk) 17:23, 10 December 2012 (UTC)
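The alternating convolution/max-pooling architecture described in the post above can be sketched minimally in plain NumPy. This is an illustrative toy, not the IDSIA configuration: the input size, kernel sizes, layer count, and random weights are all assumptions chosen only to show how the tensor shapes shrink through conv → ReLU → pool stages before the flattened features reach the classification layers.

```python
import numpy as np

def conv2d(x, kernel):
    """'Valid' 2-D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2d(x, size=2):
    """Non-overlapping max pooling; trims edges that don't fill a full window."""
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    x = x[:h, :w]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Illustrative forward pass: conv -> ReLU -> pool, twice, then flatten.
rng = np.random.default_rng(0)
image = rng.standard_normal((28, 28))                # e.g. an MNIST-sized input
k1, k2 = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

a = max_pool2d(np.maximum(conv2d(image, k1), 0))     # 28 -> 26 (conv) -> 13 (pool)
b = max_pool2d(np.maximum(conv2d(a, k2), 0))         # 13 -> 11 (conv) -> 5 (pool)
features = b.ravel()                                 # fed to classification layers
print(a.shape, b.shape, features.shape)              # (13, 13) (5, 5) (25,)
```

Each pooling step halves the spatial resolution while keeping the strongest responses, which is why only 25 features remain for the final classifier in this sketch; real competition networks use many feature maps per layer rather than the single channel shown here.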

Architecture section, quality & focus

I looked on here by chance. The architecture section, which should be a key element, has two confused and unsourced paragraphs now. And that seems to be symptomatic - you cannot (and I mean really not) discuss architecture unless you separate the artificial man-made networks from the natural/biological ones. So the article really needs to split anyway. Then architecture can be discussed. This article is not on my edit path, so I will not tag it or modify it, but if you people separate it out into artificial vs natural, it may have some hope of improving. But as is, it does not have a prayer and is a liability for the reader rather than an asset. History2007 (talk) 23:46, 19 March 2013 (UTC)

Oh Boy, there are already two pages Artificial neural network and Biological network. So I will take back my split comment. And the page Neural network (disambiguation) exists as well - totally confusing. This should just become a disambig page really. The Artificial neural network page needs a rewrite as someone there suggested. History2007 (talk) 23:55, 19 March 2013 (UTC)
I think you meant Biological_neural_network - but I agree - this is all quite confusing. I think there should be one article addressing artificial neural networks, and in that article have a section on the simulation or study of biological neural networks using the techniques of artificial neural networks. Then, keep the biological neural network article as is and keep good links between the two. Then this page can go away and just become a disambiguation page. --Obi-Wan Kenobi (talk) 10:54, 20 March 2013 (UTC)
I raised the issue on Wikiproject disambiguation, but no consensus seems likely... History2007 (talk) 11:06, 20 March 2013 (UTC)
It's the discussion on this talk page that's important - disambig will accept most of the possible outcomes...--Obi-Wan Kenobi (talk) 11:29, 20 March 2013 (UTC)
So, you have my support to do it... So just do it if you like. I will stop watching however... History2007 (talk) 18:00, 20 March 2013 (UTC)

Anyway, I touched it up a little, but there are still many errors there. The last one I just fixed confused how the Von Neumann model works, and did not mention the separation of memory from processors and how that makes it different, etc. Overall the statements in this article can just generate tears really... History2007 (talk) 15:12, 24 March 2013 (UTC)

Proposed merge with Deep learning

"Deep learning" is little more than a fad term for the current generation of neural nets, and this page describes neural net technology almost exclusively. The page neural network could do with an update from the more recent and better-written material on this page. QVVERTYVS (hm?) 11:12, 4 August 2013 (UTC)

Withdrawn - a merge with artificial neural network seems better. QVVERTYVS (hm?) 11:18, 4 August 2013 (UTC)

Proposed merge with Artificial neural network

This article discusses ANNs, mostly. It makes comparisons with biological NNs, but for the most part it's a duplicate. QVVERTYVS (hm?) 13:39, 15 July 2013 (UTC)

  • Support merge. Arguably, artificial networks and natural ones are entirely different things, but the study of natural ones is so closely tied up with studying artificial models that I think a merge (with this page as the target) makes sense for now. In the future, research developments may justify re-splitting them, but that's WP:CRYSTAL. --Tryptofish (talk) 21:18, 15 July 2013 (UTC)
  • Comment – It looks like a lot of work to either merge these or to separate the content more sensibly. I would support either way, if someone is agreeing to take it on and do a good job of it. Dicklyon (talk) 23:40, 15 July 2013 (UTC)
  • Oppose merger There are at least three different kinds of things under the topic of neural networks: (1) biological networks of neurons and associated cell types, (2) computational approximations to biological neural networks (Hodgkin-Huxley, integrate and fire, etc.), and (3) the artificial neural networks used in machine learning that bear little relation to biological networks (radial basis function NNs, multilayer perceptrons, Gaussian processes, etc.) A fourth related category might be the connectionist models used in psychology and philosophy. Somewhere on WP, we need to be able to classify and explain the differences among these systems. I'd be concerned that a merger would give undue weight to the ANNs and the other types would get lost in ANN discussion. I agree with Dicklyon that it would be a lot of work to merge these well. To me it would make more sense to move most of the ANN stuff from Neural network to Artificial neural network and make Neural network more of a WP:DABCONCEPT page that classifies and disambiguates the types of neural networks. --Mark viking (talk) 18:27, 23 July 2013 (UTC)
    • Yes, that idea about a DAB page would work, too. --Tryptofish (talk) 18:29, 23 July 2013 (UTC)
    • Alright, agreed. I did already move a lot of ANN material to the artificial neural network page, removing it here. A DABCONCEPT page seems fine. QVVERTYVS (hm?) 11:17, 4 August 2013 (UTC)
  • Support merge - A merge makes sense. APerson (talk!) 16:09, 28 August 2013 (UTC)
  • Oppose – I agree with Dicklyon and Mark. Above all, ANN and NN should not be treated as redundant concepts. Mark's suggestion to make NN a DABCONCEPT page is a good idea. Kind regards, (talk) 22:40, 30 August 2013 (UTC)
  • Oppose - While ANN derives from the inspiration of Biological NN, it is not necessarily an attempt to model NN. ANN is primarily a mathematical model, and the model can be implemented in computer hardware and software, without any attempt to reproduce or mimic a NN. Further, ongoing research in the NN and ANN fields is likely to progress in very different areas of research. Jimperrywebcs (talk) 19:53, 12 September 2013 (UTC)
  • Support merge in a more complex manner.
There is a major problem with artificial neural network: it only covers the use in computer science. There are biological neural networks that are artificially created. See here for an example: Implanted neurons, grown in the lab, take charge of brain circuitry.
Also, in computer science, the term neural network is well established. Major universities use NN instead of ANN as the name of subjects. Here is an example: http://www.cse.unsw.edu.au/~cs9444/
ANN should be renamed to neural network (computer).
So, in summary, I propose the following changes:
1. Move the relevant contents of ANN and NN to neural network (computer).
2. Change NN to a WP:DABCONCEPT page that points to neural network (computer) and neuron.
3. Change ANN to a WP:DABCONCEPT page, or rewrite it. Biological NNs that are artificially created should be included.
Science.philosophy.arts (talk) 01:29, 20 September 2013 (UTC)
I like your idea of a page on Neural network (computer), or perhaps better Neural network (digital). However, I disagree with relegating the biological side of the topic to Neuron, which is principally about individual cells. Perhaps, instead, there should be a dedicated page called Neural network (biological). We might be able to get by with those two pages, each with a hatnote to serve for DAB, and in that case, we might not need a DAB page. --Tryptofish (talk) 22:28, 20 September 2013 (UTC)
Searching for the keywords 'neural networks computer' on Google returns more results than 'neural networks digital', so Neural network (computer) is the more common term.
Can anyone write neural network (biological) soon? The feasibility of the proposals should always be considered. Neural network (biological) could be a short paragraph split from Neuron for the moment.
Science.philosophy.arts (talk) 23:26, 20 September 2013 (UTC)
Whatever else, it should not be Neuron, which, by definition, is not about networks. Neural coding would be better. --Tryptofish (talk) 21:45, 21 September 2013 (UTC)
  • Since there was such a mess, I was bold and redirected NN to the NN dab. Almost all the content of NN was already in ANN, so it didn't make any sense to have both. We can rename ANN and BNN to "Neural Network (Biology)" and "Neural Network (Computer Science)" if you want. We can make NN a dab instead of having Neural network (disambiguation).--Diaa abdelmoneim (talk) 20:42, 28 September 2013 (UTC)

My revert

I just reverted a pile of edits that I thought mostly made things worse, starting with several statements attributed to this source, which I can't identify: [11] Please identify the source better (an author, perhaps?) so I can see if there's a version accessible somewhere that doesn't require the university login, and also quote to us what it says, since the statements attributed to it seem odd.

Statements attributed to this source also seem odd; some quotes might be of interest, or a more serious source should perhaps be used: [12]. I found it online and don't see support for "impossible to come up with a function by hand" or "the threshold value will change to make the network as accurate as possible". I rolled back further, finding no support in that source for "When some neurons are not working, the network still functions overall making them reliable." Subsequent edits were also variously suspect and hard to interpret. I can go into detail on request. Dicklyon (talk) 21:41, 8 December 2018 (UTC)

@Jtumina: – I just noticed you're a student working on an assignment. Please engage here if you want my help in making your edits more correct and constructive. Dicklyon (talk) 21:58, 8 December 2018 (UTC)

References

  1. ^ http://www.kurzweilai.net/how-bio-inspired-deep-learning-keeps-winning-competitions 2012 Kurzweil AI Interview with Jürgen Schmidhuber on the eight competitions won by his Deep Learning team 2009-2012
  2. ^ Graves, Alex; and Schmidhuber, Jürgen; Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks, in Bengio, Yoshua; Schuurmans, Dale; Lafferty, John; Williams, Chris K. I.; and Culotta, Aron (eds.), Advances in Neural Information Processing Systems 22 (NIPS'22), December 7th–10th, 2009, Vancouver, BC, Neural Information Processing Systems (NIPS) Foundation, 2009, pp. 545–552
  3. ^ A. Graves, M. Liwicki, S. Fernandez, R. Bertolami, H. Bunke, J. Schmidhuber. A Novel Connectionist System for Improved Unconstrained Handwriting Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 5, 2009.
  4. ^ http://www.scholarpedia.org/article/Deep_belief_networks
  5. ^ Hinton, G. E.; Osindero, S.; Teh, Y. (2006). "A fast learning algorithm for deep belief nets" (PDF). Neural Computation. 18 (7): 1527–1554. doi:10.1162/neco.2006.18.7.1527. PMID 16764513.
  6. ^ K. Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36(4): 193–202, 1980.
  7. ^ M. Riesenhuber, T. Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 1999.
  8. ^ D. C. Ciresan, U. Meier, J. Masci, J. Schmidhuber. Multi-Column Deep Neural Network for Traffic Sign Classification. Neural Networks, 2012.
  9. ^ D. Ciresan, A. Giusti, L. Gambardella, J. Schmidhuber. Deep Neural Networks Segment Neuronal Membranes in Electron Microscopy Images. In Advances in Neural Information Processing Systems (NIPS 2012), Lake Tahoe, 2012.
  10. ^ D. C. Ciresan, U. Meier, J. Schmidhuber. Multi-column Deep Neural Networks for Image Classification. IEEE Conf. on Computer Vision and Pattern Recognition CVPR 2012.
  11. ^ "Adaptive algorithms". Machine Design. 69: 144. Apr 17, 1997 – via ProQuest Central.
  12. ^ Krogh, Anders (1 February 2008). "What are artificial neural networks?". Nature Biotechnology. 26 (2): 195–197. doi:10.1038/nbt1386. ISSN 1087-0156.

Merge?

I think that a merge would be a bad idea. It sounds like this has been thoroughly discussed before. North8000 (talk) 19:21, 14 May 2019 (UTC)

@North8000: It might be better to post this instead at Talk:Artificial neural network, where the merge discussion is. Otherwise, the person closing the merge discussion won't see it. Thanks! --Tryptofish (talk) 22:25, 14 May 2019 (UTC)

Under criticisms the "Hard disk" reference was obsolete. Any discomfort with the change to storage capacity?

Under criticism: ..."— which can consume vast amounts of computer memory and hard disk space." A hard disk is just one of many examples of a data storage device, and a fading one, especially in high-performance data processing applications such as most AI-related work. It's never appropriate to name one particular type of data storage device when referring to data storage in categorical terms anyway, as in the sentence above.

So I'll revise it to: ..."— which can consume vast amounts of computer memory and data storage capacity." If there's any discomfort, please advise. Cheers! --H Bruce Campbell (talk) 19:24, 26 February 2021 (UTC)