Talk:History of artificial intelligence/Archive 1

"both" or "either"

Under section Experimental AI research, the last sentence of the second paragraph reads: "However, it has become clear that contemporary methods using both broad approaches have severe limitations." This is unclear to me. I think that "either" is meant for "both", but before rewriting, this pre-bold author-respecting newbie thought I'd ask. If it's meant to be "both", I feel this should be explained in the article. Thx, "alyosha" 22:55, 15 December 2005 (UTC)

5th Generation

This article omits what is probably the most important series of events in AI during the 1980s and 1990s: the creation of the Japanese 5th Generation Project, whose aim was to produce computers "as smart as a man", programmed in Prolog, with their computing performance measured in LIPS (Logical Inferences per Second). The US started a project to compete with this, headed by Navy Admiral Bobby Inman. Both projects tanked horribly, and the AI community totally abandoned attempts to mimic human thought, and re-invented itself as researchers in expert systems and fuzzy logic. Marvin Minsky is extremely critical of what is called AI today, with its robot wars, and Roomba, the intelligent vacuum cleaner.

Deep Blue was not an artificial intelligence exercise, but a demonstration that massively parallel computing with custom ASICs could do by brute force what heuristics and AI failed to accomplish - play chess better than a human.

I strongly suggest someone find a copy of "The Fifth Generation" by Edward A. Feigenbaum and read it. The article currently reads as if Artificial Intelligence has been a rousing success, when in fact, research attempting to produce real artificial intelligence like HAL in 2001 spent billions of dollars, crashed and burned horribly, along with pretty much all of the AI movement. What exists now has much lower expectations, and no longer has any resemblance to the AI of yesteryear. Hermitian 10:59, 10 January 2006 (UTC)

I would support your creating a new subsection (possibly to be expanded into its own article) on AI failures and/or hype. That is certainly a part of its history. However, there is a reason this page seems enthusiastic; it lists only a subset of the successful projects, and I think / hope people edit down the vain claims of researchers who pretend they have achieved more than they have. The truth is that we have learned a lot about intelligence from trying to build it, so we know why the early hopes were ludicrous, and we have a better understanding of our own capacities now as a result. And we are making progress, even if it is slower & steadier than we initially hoped. I am sure you could find a lot of learned people from just a century ago who wouldn't have believed a machine could do mathematics or play chess, even though now we know recognizing faces is just as hard & wonderful.--Jaibe 17:18, 14 September 2006 (UTC)

Recent DARPA grand challenge

Personally, I think this page could do with a well-written blob about Stanley and the other competitors who finally actually completed the desert race. It seems like something that will be somewhat of a milestone in history.


Query quote

Is this quote true: Douglas Hofstadter, in Gödel, Escher, Bach, pointed out that this moving of the goalposts effectively defines "intelligence" as "whatever humans can do that machines cannot".

Can anybody refer me to the chapter/section? --moxon 20:15, 11 January 2006 (UTC)

linking dates

An editor recently deleted all the date links on this page. Personally, I always thought that date links are silly & wouldn't scale (so I don't mind!), but I just thought people interested in the page might want to weigh in on this change. Personally I'd say the page looks more professional now. But I want to make sure we are following the norms. --Jaibe 17:10, 14 September 2006 (UTC)

Cleanup Tag - References

I added a cleanup tag for the chronological history; surely we can find some references for these events. Bugone 00:14, 9 March 2007 (UTC)

Steady progress?

I'm not sure that the claim that AI has made "steady progress" can be justified. There have been at least two periods when AI seemed to falter: the so-called "AI Winters" of the late 70s and late 90s.


It seems to me a more neutral statement would be something like this: "Some believe that the quest for artificial intelligence has made steady progress since at least the 1950s[1] while others claim that progress in A.I. has been slow or non-existent[2]." But I need to find the references. Dreyfus or Winograd might do for the second case, but they are kind of old now. For the first reference one could use Kurzweil or Moravec.


The article should mention the discovery and resolution of major problems in the history of AI: Minsky and Papert's criticism of perceptrons and how Hopfield networks and backpropagation resolved it. The discovery of intractability ([1]) and how "scruffy" strategies attempt to address it. The discovery of the "common sense knowledge" problem and how expert systems and CYC use very different strategies to overcome it. The devastating effect of the Lighthill Report and the ALPAC report, the emergence of commercial success in the late 80s, and then the most recent 'winter' of the 90s and where things now stand at Google and elsewhere. It seems to me these are the central events and would capture some of the boom-and-bust nature of the history of AI. For reference, there is AI: The Tumultuous History of the Search for Artificial Intelligence by Crevier, and I can look for a newer one. If I have time I'll look into it. CharlesGillingham 22:20, 10 June 2007 (UTC)

Rewrite in Progress

I am preparing a complete rewrite of this article. It's at User:CharlesGillingham/History of AI. Once it's finished, I will move most of the information on this page to a new "Timeline of artificial intelligence" page, archive this talk page, and replace the entire article. I should be done before the end of this week. CharlesGillingham 21:53, 22 July 2007 (UTC)

A few gaps

I was surprised to see no mention of:

  • Chess. Claude Shannon wrote about it, Mikhail Botvinnik made a little progress in developing an algorithm that "thought" like a human player (these articles contain refs). See also Computers and Chess, which may provide WP:RS refs.
  • Computer game AIs.
  • Incorporation of AI-like features into e.g. spam-filters in email programs, many of which respond to "training".

I also remember someone writing (? in the 1990s) that, every time an AI research field looked like it was getting somewhere, the goalposts were moved - e.g. when chess programs became competitive, AI was redefined to exclude them and focus on e.g. visual recognition. -- Philcha (talk) 17:10, 5 October 2008 (UTC)

Keeping in mind that this article is already a bit long, we could cover these topics:
  • Computer chess and computer checkers. A paragraph describing the experiments of Claude Shannon, Christopher Strachey and Arthur Samuel could be added between "Turing's test" and "Logic theorist" sections. This was an early test bed for AI research, especially in the 40s and 50s. McCorduck devotes a chapter to this topic. Crevier does as well. Russell and Norvig mention it. (There was such a paragraph in early drafts of the article, but I cut it for length. I felt that it was less important than logic, natural language, microworlds, connectionism, etc. I'm open to arguments to the contrary.)
  • Game AI. I think this could be mentioned in the list of applications in AI behind the scenes. I'm not sure if there is more to say about it than that. Is it more historically influential than that? (I don't have a good reference for this).
  • Spam filtering could be added to the list of successful applications mentioned in AI behind the scenes as well. Perhaps like this: "spam filtering (which uses sophisticated machine learning algorithms)"
  • AI effect. This is described in the second paragraph of AI behind the scenes. (The article on this topic was deleted for non-notability. A new draft is at User:CharlesGillingham/AI effect.)
---- CharlesGillingham (talk) 23:53, 5 October 2008 (UTC)
Congratulations on the GA rating!
CharlesGillingham, I've looked at your User:CharlesGillingham/AI effect and all you need to do to prove notability is put a few of the refs inline.
A bit more about Computer game AIs - this is where non-specialist readers may be most familiar with the phrase "AI" (the other is in quasi-science fiction movies). Game AIs illustrate a lot of the difficulties, e.g. they are poor at path-finding (getting a group of units from A to B without traffic jams and without some units wandering off and getting isolated) and at planning (they rely on pre-scripted economic development and attack sequences, and on various "unfair" advantages). WP:RS for this would mostly be at developer mags, e.g. Gamasutra. The best game AIs I'm aware of are Total Annihilation (from personal observation, units act as teams - if they have more firepower than is needed to destroy the designated target, they do a good job of picking a secondary target and deciding who should shoot what) and Galactic Civilizations (praised for its good planning and clever strategies, apparently without cheating).
You might need just one more sentence about the impact of Moore's law for the benefit of non-specialists - e.g. the mid-range desktop on which I'm writing this is over 200 times faster than IBM's top mainframes were in the mid-1970s and has 1000x more RAM, and the first researchers were using 1950s machines with less processing power than a modern digital watch.
The Tesler quote (1970) about "AI" being whatever computers can't do yet would fit well into "AI behind the scenes".
Section "Nouvelle AI" could do with a sentence summmarising 1 or 2 projects based on synthetic animals. -- Philcha (talk) 13:38, 20 October 2008 (UTC)

All this Kurzweil Self-Promotion

I found five references to Ray Kurzweil, who did not contribute anything to Artificial Intelligence! Apparently they were inserted by Kurzweil himself (or perhaps by somebody in his company's PR department). In the field of AI he certainly is not regarded as an influential researcher, although he writes a lot about the future of AI, elaborating on big ideas introduced by others, such as the technological singularity popularized by Vernor Vinge 20 years ago. I suggest removing those Kurzweil references and focusing on people who really had an impact on AI history. Quiname (talk) 20:42, 14 April 2011 (UTC)

Kurzweil is used in the text only as an example of current optimism. I think this is appropriate and I can't think of anyone more optimistic or more popular. Kurzweil's Singularity is also used to cite a few things (such as the "AI effect"). This is appropriate, I think, because the book is a popular introduction to AI, and popular introductions (i.e. WP:SECONDARY sources) are good sources for an article such as this, which is partly about AI, but also partly about what people say about AI. ---- CharlesGillingham (talk) 22:56, 14 April 2011 (UTC)

Tone of the Introduction

The tone (WP:TONE, WP:PEACOCK) of this article's introduction is too triumphant and embellished. I don't know enough about the topic to improve it but it could undoubtedly use improving. Exercisephys (talk) 16:45, 25 May 2013 (UTC)

Turing tradition

The article should perhaps mention the Turing tradition which is perhaps less well known on the other side of the pond. The Turing tradition is an approach to machine learning based on "(i) the use of logic and (ii) close attention to practical problems". [2] It is a common theme in the work of Alan Turing, Donald Michie, Ehud Shapiro, Ross Quinlan, Stephen Muggleton. Pgr94 (talk) 08:43, 16 August 2008 (UTC)

The school of thought is certainly influential in the development of Inductive Logic Programming and probably also Abductive Logic Programming. Pgr94 (talk) 09:01, 16 August 2008 (UTC)
If I'm reading this right, this is a tradition (1) popular in England (2) based on logic (3) focussed on machine learning, right? Do you think it's fair to throw this in with other logical approaches to AI in England, i.e. Robert Kowalski at Edinburgh University, etc.? ---- CharlesGillingham (talk) 00:47, 17 August 2008 (UTC)
I agree this should be covered. I've had to dig for more material about England, Europe and Japan. Just one sentence under "Logic" should do it. This may also belong in AI#Traditional symbolic AI#Logical AI and probably in machine learning#History (if someone gets around to writing a History section for that article). The key question is, how influential was it? I'd like to find a second source. ---- CharlesGillingham (talk) 01:01, 17 August 2008 (UTC)
Schools of thought don't respect national boundaries, so "popular in England" is not how I'd describe it. ILP is significant on an international level (although it hasn't received much attention in the US for some reason). Michie was a colleague of Turing; Muggleton (principal player in ILP) was his student. The influence of Turing is clearly there. Gillies named it, but perhaps the few refs suggest that the name is not that notable[3] [4] Pgr94 (talk) 11:08, 18 August 2008 (UTC)
A small point perhaps, but the UK is not England and Edinburgh is in Scotland, also part of the UK. 17:06, 7 March 2015 (UTC) 86.9.223.140 (talk)

AI used for long duration Space Exploration

Consider developing AI for space exploration. AI would have the ability to make decisions on flight path, speed and modifications of vehicle to enhance speed, data transmission to point of origin, data processing and storage among other goals. It could have constant communication with a terra-based AI system for transmission of knowledge and reporting.

It would be the best possible method for humans to explore at least the local cluster of stars at 0.5 to 0.9 times light speed.

unsigned comment added by 70.43.17.173 (talk) 20:47, 29 October 2015 (UTC) TonSerra (talk) 15:59, 30 October 2015 (UTC)

External links modified

Hello fellow Wikipedians,

I have just modified 3 external links on History of artificial intelligence. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 09:49, 3 April 2017 (UTC)

External links modified

Hello fellow Wikipedians,

I have just modified one external link on History of artificial intelligence. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 01:16, 5 November 2017 (UTC)

Do we really need Pamela McCorduck in the 1st paragraph? Or at all?

Looks like promotion to me. — Preceding unsigned comment added by Callmesolis (talkcontribs) 22:12, 9 February 2019 (UTC)

Vague "weasel" language?

One sentence struck me in this article, regarding the vanishing gradient problem. Overcoming this problem was central to the advance of Deep Learning, and Long short-term memory models were the first to resolve it and remain among the most widely used models that do. So the current text "There have been many methods developed to approach this problem, such as Long short-term memory units." seems to play down the important role of these types of networks, while being vague in not listing any of the other "many methods".

Unless someone wants to suggest some other prominent solutions to the vanishing gradient problem that I am not aware of (more modern ones, perhaps?), I would propose revising this section to clarify that LSTMs were a key breakthrough in the field, precisely because they were the first solution to the vanishing gradient problem, and to date they constitute a prevalent network type in modern deep learning research and applications. — Preceding unsigned comment added by 62.178.202.229 (talk) 14:19, 13 July 2019 (UTC)
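A note for whoever revises this passage: the usual textbook sketch of why gradients vanish in a plain recurrent network, and why the LSTM cell state avoids it, goes roughly as follows (the notation below is illustrative, not taken from the article or any particular source):

\[
\frac{\partial \mathcal{L}}{\partial h_1} \;=\; \frac{\partial \mathcal{L}}{\partial h_T}\,\prod_{t=2}^{T}\frac{\partial h_t}{\partial h_{t-1}},
\qquad
\left\lVert\frac{\partial h_t}{\partial h_{t-1}}\right\rVert < 1 \;\Longrightarrow\; \text{the product shrinks exponentially in } T,
\]

\[
c_t \;=\; f_t \odot c_{t-1} \;+\; i_t \odot \tilde{c}_t,
\qquad
\frac{\partial c_t}{\partial c_{t-1}} \;\approx\; \operatorname{diag}(f_t),
\]

so while the forget gate \(f_t\) stays near 1, error signals can flow back across many timesteps nearly unchanged; that additive cell-state path is what makes LSTMs a solution to the problem the sentence in the article alludes to.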

Expansion / Reorganization of "Precursors"

I plan on restructuring the "precursors" section by subdividing it into two sub-sections: "Mythical, Fictional, and Speculative" and "Theoretical and Technological." The bulk of what is currently in the precursors section will go under the second heading. I plan on expanding upon the first heading significantly. (Not more than ~500 words.) HieronymusBot (talk) 18:17, 15 March 2020 (UTC)

Wrong attribution of ada lovelace quotation

Ada Lovelace is quoted: "might compose elaborate and scientific pieces of music of any degree of complexity or extent"; however, in the linked source (https://johnrhudson.me.uk/computing/Menabrea_Sketch.pdf) this is found only in the section "notes by the translator". — Preceding unsigned comment added by Cashney (talkcontribs) 11:48, 9 August 2020 (UTC)

No longer up-to-date

This article doesn't cover the last 15 years, when AI (and especially deep learning) have begun to dominate finance, information technology, science, and industry. I don't plan on writing this, but if I did, the topics would be (1) deep learning: big data + fast machines + statistical AI == ginormous success. (2) artificial general intelligence, which has coalesced into a new and interesting subfield, with a very high profile, very high hopes and no real successes. Brain simulation (Numenta, Blue Brain) deserves a mention here. (3) Jaron Lanier and Noam Chomsky's criticisms of deep learning and statistical AI in general: they argue it's not AGI, it's really just statistics. ---- CharlesGillingham (talk) 06:28, 10 September 2016 (UTC)

I realize you wrote this 5 years ago, although it remains mostly reasonable. I'm not sure if the article has been updated in the interim or not. Also, I'm not sure at what point in time we should consider the line of demarcation between historical artificial intelligence and the current full-blooming artificial intelligence of the present in 2021... such as it is. I'm not replying in order to dump on AI despite my last sentence. I think, with the benefit of 5 year's hindsight, I would break out an update of the article to include:
1. Machine learning, which is a term sometimes used synonymously with AI. For anti-money laundering, financial risk management, fraud/abuse detection in general (e.g. analyzing computer and network log files to surface anomalies and pattern matches), e-commerce, and law enforcement-related surveillance applications, yes, machine learning and the data/database architectures and appliances often used to implement it at scale (e.g. Hadoop, Hive, Pig, MongoDB, Kubernetes, Netezza) are successes. As for deep learning (versus ML), maybe, but its successes haven't been nearly as well documented.
2. Artificial general intelligence: ditto, with the addition of Neuralink and maybe others.
3. Criticisms of machine learning AND deep learning for not being AGI: yes, but I believe there are subject matter experts more credible than Jaron Lanier for that purpose. (I have no problem with Jaron Lanier, as you can read here if you wish; jeez louise, I wrote that back in 2012.) It also seems better to leave Noam Chomsky out of criticizing anything on the basis of it being merely statistics.
I *am* curious (as a sometimes statistician, often applier of probability theory IRL) about the specifics of Noam Chomsky's critique of deep learning as just statistics, CharlesGillingham if you wouldn't mind elaborating a bit.--FeralOink (talk) 08:22, 12 July 2021 (UTC)
These points have been added to the article (See the final section of the article). Critiques of AI's current approach are not mentioned, and I think that makes sense. We'll add a paragraph about that after Statistical AI fails, during the next AI winter ;).
I don't think we need to branch the topic of "AI" because, well, we still call it by the same name, and I think the whole point here is to see all the ways AI has tried and failed in the past. The issues that came up years ago are issues AI will need to deal with in the future. ---- CharlesGillingham (talk) 18:51, 12 July 2021 (UTC)

Neats vs scruffies

Every once in a while, during the past 15 years, someone comes through and deletes all references to the terms "neat" and "scruffy". So let's talk about it.

I've put the terms back into the article because: (a) "neats vs. scruffies" is genuine, real history: people talked about this (a lot) in the late 70s and 80s. See AAAI conferences and talks and so on. Part of writing history is describing the world as it was, as it described itself. They described themselves this way, so we need to give the reader an insight into how they thought at the time. (b) Russell and Norvig's "victory of the neats" quip provides a nice through-line for the article, and emphasizes the fact that each generation has viewed the field differently as it has evolved. This is also good history writing, and there simply is no more reliable source on AI than Russell and Norvig.

On the other side, by the late eighties, most people were sick of hearing about it, and sick of the ridiculous ways that people would try to weave the distinction into badly thought out "general theories" of AI or cognition. I remember Robert Wilensky telling me in 1987: "Never read anything that mentions neats vs scruffies or procedural vs. declarative". In short: we realized it was a stupid distinction years later, but that doesn't mean it isn't historically relevant.

If you disagree, please let me know what the actual problem is. ----CharlesGillingham (talk) 16:44, 5 July 2020 (UTC)

"Neats and scruffies" was not as prevalent in the community as this WIKI entry makes it out to be. It was limited to a small group of admittedly combative researchers, who did not typically describe themselves that way--but did describe their processes in that manner. Utilizing a process does not make one a particular type of researcher or developer, and thus limits the application of the term. The term doesn't show up in any relevant reporting of the day (aka, Freedman, Newquist, Levy, any computer journals). It can be referenced, perhaps to another entry, but to use that term as a linchpin for this entry is completely disproportionate to its use then and now. I would like to see an actual set of citations from the era describing the developers in actual terms such as "XXX is a scruffy" or "XXX and the team of neats." Thanks. TrainTracking1 (talk) 20:31, 2 April 2021 (UTC).
TrainTracking1, here is a reference: Minsky, Marvin L. "Logical versus analogical or symbolic versus connectionist or neat versus scruffy." AI magazine 12.2 (1991): 34-34. [5]. --Hectorpal (talk) 02:13, 1 August 2021 (UTC)
Weighing in. Completely against the isolated use of "neats vs scruffies." It's what the real world calls "inside baseball"--as far as that goes--in that it has no relevance to the greater discussion, and is not at all relevant to the understanding of the history of AI. You might as well start talking about McCarthyists vs Minskyites if you want to get into conflict. And by 1991, the terms were already out of whatever fashion they were in. Perhaps it's time for someone to start a "neats vs scruffies" page, which should include the use of the term in other disciplines, as it apparently is not limited to AI. Andreldritch (talk) 22:37, 1 August 2021 (UTC)

There are 74 hits on McCorduck in this article

They are all in the sources, but that still seems like a lot. Better sources added incrementally over time might be good. I am only suggesting, and realize I can do that too. Help out, that is.--FeralOink (talk) 14:55, 8 July 2021 (UTC)

McCorduck wrote the definitive history of AI. There is no better source. The fact that there are so many references to it is an indication that this article uses only the most reliable sources -- no fringe points of view, no random semi-relevant contributions. Just the mainstream, consensus understanding of the history of AI.
I would argue that articles about established academic topics (such as history) are more likely to be reliable if they depend on fewer sources, not more -- or even just one: the most respected mainstream source. The other random sources tend to be about topics that are either unimportant, fringe, or (at worst) self-serving. ---- CharlesGillingham (talk) 19:26, 8 July 2021 (UTC)
Completely disagree about McCorduck and "the definitive history." Her book was published in 1979, well before the commercial development of AI--and its attendant developers and corporate purveyors--was even established (ranging from Symbolics and LISP Machine to Intellicorp and Inference). In fact, her only association with the commercial rise of AI was her work with Feigenbaum, which left them both on the wrong side of the "Fifth Generation" call to arms. The use of her as the most oft-cited source is indicative of nothing . . . other than familiarity with her work by an early Wiki editor on the topic of AI. (That's a fallacy akin to saying the existence of so many yellow cars in NYC is indicative of that being the best choice of colors for cars.) Other writers, like Norvig, Freedman, Newquist, and Crevier, wrote about the rise of commercial AI in much more detail than McCorduck--essentially because she barely touched on it at all. To cite her, or any single author, as the creator of the definitive history is simply misguided and biased. And, of course, it leans into the bias of using only one predominant source. McCorduck is not the only highly-regarded chronicler of AI, nor should she (or anyone else) be given that title. Let's get some more sources in here. TrainTracking1 (talk) 06:17, 11 July 2021 (UTC)
Arguing in favor of historical articles being best if dependent on a single historical source is antithetical to a balanced point of view (unless perhaps for an article on a period prior to, say, 400 B.C., where sources are scarce). For History of AI, single-sourcing will almost certainly result in WP:UNDUE and WP:NPOV problems for the article overall. Thank you for the suggestions, TrainTracking1. I will try to find more written by Norvig, Freedman et al. I encourage other editors to do similarly. A great deal has happened in the history of AI that post-dates 1979 (publication of McCorduck's book), but is still part of the history—not the present—of artificial intelligence.--FeralOink (talk) 10:25, 2 August 2021 (UTC)

Please protect the "neats" and "scruffies"

I just read that those terms are being deleted. Please protect them. It was very important. To be honest, this is going to come back: "neats" are challenging deep learning, which is for them the new "scruffy" approach. Just see Judea Pearl:neat to see this in progress. Hectorpal (talk) 16:36, 1 April 2021 (UTC)

Neats and scruffies is only a term used by SOME in the AI community (historically, Schank is the only person to be associated with it). It is also a popular characterisation in many areas of academia, and is rarely used in the actual history of AI--except by those who consider themselves on one side or the other. Labeling diminishes the efforts of various schools and the wide variety of crossover. Suggest leaving the battle to the actual WIKI entry on Neats and Scruffies. Keep history entry about the actual events and not the insider squabbling. TrainTracking1 (talk) 20:22, 2 April 2021 (UTC)
Hectorpal, your link to archetypal neat URL is a 404. Did you mean this neat? --FeralOink (talk) 14:51, 8 July 2021 (UTC) BTW you have a great LinkedIn pic! TrainTracking1, as an uninvolved editor, I would keep the neats and scruffies and improve the sourcing to McCorduck; please see my comment below. I am tempted to PROD her BLP. This is what it consists of at the moment:

McCorduck grew up in California and attended the University of California, Berkeley, from which she graduated in 1960. McCorduck was invited to contribute to a book of readings on artificial intelligence while a senior at the University of California, Berkeley, in 1960. At the time she did not know what artificial intelligence was. McCorduck lived for more than forty years in New York City with her husband Joseph F. Traub. After her husband's death she moved back to California, where she had grown up. She now lies [sic] in San Francisco.

Word count for "McCorduck" is 74 on this article.--FeralOink (talk) 10:50, 8 July 2021 (UTC)
FeralOink, yes, thanks. Fixed. --Hectorpal (talk) 02:09, 1 August 2021 (UTC)

@TrainTracking1: Absolutely incorrect on all counts. It was only used from 1975 to 1985. It is mentioned in the history section of the leading AI textbook, Russell & Norvig. It is mentioned in both of the most popular and respected histories of AI, Crevier and McCorduck. It wasn't unique to Schank (at the time, I didn't even know it was Schank who came up with it). It was the topic of talks and symposiums at AAAI. It was addressed in the presidential address of AAAI several times. Papers were written about it (usually including the "procedural/declarative distinction"). Also, the rivalry between MIT and Stanford was very real -- each side thought the other was dead wrong.

I agree that, by 1985, everybody was sick of hearing about it. A lot of worthless papers were written about it, papers that were trying to establish some kind of new paradigm for the field or their own "better" definition of AI and so on. These kinds of papers are useless, boring and a dime a dozen. People still write these kinds of papers today, e.g. "Defining 'Synthetic Consciousness'". They are just as useless now.

But that doesn't mean that scruffy/neat doesn't raise an interesting question about AI. Is there a simple and elegant "master algorithm" for AGI? Or do we necessarily have to solve a lot of messy unrelated problems? ---- CharlesGillingham (talk) 19:48, 8 July 2021 (UTC)

Forgive me, I didn't realize we had talked about this a few months ago. Forgive me for restating my position. ---- CharlesGillingham (talk) 19:50, 8 July 2021 (UTC)

Here is a reference on neat vs scruffy. Minsky, Marvin L. "Logical versus analogical or symbolic versus connectionist or neat versus scruffy." AI magazine 12.2 (1991): 34-34. [6]. 473 citations in Google Scholar. The expressions belong in the history of AI.<unsigned comment>

At least one mention of neats and scruffies seems justified, especially since there is an already extant Wikipedia article on the subject in the context of AI. I'll see if or where it might fit in this article. Maybe the above source will be useful, whoever deposited it here.--FeralOink (talk) 10:33, 2 August 2021 (UTC)
I am doing a major re-write and update of neats and scruffies as the sources aren't great, and as such, dates and context are missing in a lot of places. See [my revisions here to date], if anyone is curious.--FeralOink (talk) 14:06, 2 August 2021 (UTC)

Wiki Education Foundation-supported course assignment

This article is or was the subject of a Wiki Education Foundation-supported course assignment. Further details are available on the course page. Student editor(s): VjiaoBlack.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 23:35, 16 January 2022 (UTC)

Google, 2022

In 2022, Google is said to have developed an AI that is 158 million times faster than the world's fastest supercomputer (source: Medium.com). — Preceding unsigned comment added by 151.38.135.105 (talk) 20:39, 22 August 2022 (UTC)

  1. ^ 1
  2. ^ 2