1957: When Machines that Think, Learn, and Create Arrived

Timothy Taylor 29/04/2020

Herbert Simon and Allen Newell were pioneers in artificial intelligence: that is, they were among the first to think about the issues involved in designing computers that were not just extremely fast at doing the calculations for well-structured problems, but that could learn from their own mistakes and teach themselves to do better. Simon and Newell shared the Turing Award, sometimes referred to as the "Nobel prize in computing," in 1975, and Simon won the Nobel prize in economics in 1978.

Back in 1957, Simon and Newell made some strong claims about the near-term future of these new steps in computing technology. In a speech co-authored by both, but delivered by Simon, he said:

[T]he simplest way I can summarize the situation is to say that there are now in the world machines that think, that learn, and that create. Moreover, their ability to do these things is going to increase rapidly until--in a visible future--the range of problems they can handle will be coextensive with the range to which the human mind has been applied.

The lecture was published in Operations Research, January-February 1958, under the title "Heuristic Problem Solving: The Next Advance in Operations Research" (pp. 1-10). Re-reading the lecture today, one is struck by the sweeping changes that these extremely well-informed authors expected to occur within a horizon of about ten years. However, about 60 years later, despite extraordinary changes in computing technology, software, and information technology more broadly, we are still some distance from the future that Simon and Newell predicted. Here's an additional flavor of the Simon and Newell argument from 1957.

Here is their admission that, up to 1957, computing and operations research had focused mainly on well-structured problems:

In short, well-structured problems are those that can be formulated explicitly and quantitatively, and that can then be solved by known and feasible computational techniques. ... Problems are ill-structured when they are not well-structured. In some cases, for example, the essential variables are not numerical at all, but symbolic or verbal. An executive who is drafting a sick-leave policy is searching for words, not numbers. Second, there are many important situations in everyday life where the objective function, the goal, is vague and nonquantitative. How, for example, do we evaluate the quality of an educational system or the effectiveness of a public relations department? Third, there are many practical problems--it would be accurate to say 'most practical problems'--for which computational algorithms simply are not available.

If we face the facts of organizational life, we are forced to admit that the majority of decisions that executives face every day and certainly a majority of the very most important decisions lie much closer to the ill-structured than to the well-structured end of the spectrum. And yet, operations research and management science, for all their solid contributions to management, have not yet made much headway in the area of ill-structured problems. These are still almost exclusively the province of the experienced manager with his 'judgment and intuition.' The basic decisions about the design of organization structures are still made by judgment rather than science; business policy at top-management levels is still more often a matter of hunch than of calculation. Operations research has had more to do with the factory manager and the production-scheduling clerk than it has with the vice-president and the Board of Directors.

But by 1957, the ability to solve ill-structured problems had nearly arrived, they wrote:

Even while operations research is solving well-structured problems, fundamental research is dissolving the mystery of how humans solve ill-structured problems. Moreover, we have begun to learn how to use computers to solve these problems, where we do not have systematic and efficient computational algorithms. And we now know, at least in a limited area, not only how to program computers to perform such problem-solving activities successfully; we know also how to program computers to learn to do these things.

In short, we now have the elements of a theory of heuristic (as contrasted with algorithmic) problem solving; and we can use this theory both to understand human heuristic processes and to simulate such processes with digital computers. Intuition, insight, and learning are no longer exclusive possessions of humans: any large high-speed computer can be programmed to exhibit them also.

I cannot give here the detailed evidence on which these assertions--and very strong assertions they are--are based. I must warn you that examples of successful computer programs for heuristic problem solving are still very few. One pioneering effort was a program written by O.G. Selfridge and G. P. Dinneen that permitted a computer to learn to distinguish between figures representing the letter O and figures representing A presented to it 'visually.' The program that has been described most completely in the literature gives a computer the ability to discover proofs for mathematical theorems--not to verify proofs, it should be noted, for a simple algorithm could be devised for that, but to perform the 'creative' and 'intuitive' activities of a scientist seeking the proof of a theorem. The program is also being used to predict the behavior of humans when solving such problems. This program is the product of work carried on jointly at the Carnegie Institute of Technology and the Rand Corporation, by Allen Newell, J. C. Shaw, and myself.

A number of investigations in the same general direction--involving such human activities as language translation, chess playing, engineering design, musical composition, and pattern recognition--are under way at other research centers. At least one computer now designs small standard electric motors (from customer specifications to the final design) for a manufacturing concern, one plays a pretty fair game of checkers, and several others know the rudiments of chess. The ILLIAC, at the University of Illinois, composes music, using, I believe, the counterpoint of Palestrina; and I am told by a competent judge that the resulting product is aesthetically interesting.

So where would what we now call "artificial intelligence" be in 10 years?

On the basis of these developments, and the speed with which research in this field is progressing, I am willing to make the following predictions, to be realized within the next ten years: 

1. That within ten years a digital computer will be the world's chess champion, unless the rules bar it from competition.

2. That within ten years a digital computer will discover and prove an important new mathematical theorem.

3. That within ten years a digital computer will write music that will be accepted by critics as possessing considerable aesthetic value.

4. That within ten years most theories in psychology will take the form of computer programs, or of qualitative statements about the characteristics of computer programs.

It is not my aim to surprise or shock you--if indeed that were possible in an age of nuclear fission and prospective interplanetary travel. But the simplest way I can summarize the situation is to say that there are now in the world machines that think, that learn, and that create. Moreover, their ability to do these things is going to increase rapidly until--in a visible future--the range of problems they can handle will be coextensive with the range to which the human mind has been applied.

I love the casual mention--in 1957!--that humans are already in the age of nuclear fission and prospective interplanetary travel. Do we still live in the age of nuclear fission and prospective interplanetary travel? Or did we leave it behind somewhere along the way and move to another age?

It's not that the predictions by Simon and Newell are necessarily incorrect. But many of these problems are evidently harder than they thought. For example, computers are now stronger chess players than humans, but it took until 1997--with vastly more powerful computers after many doublings of computing power via Moore's law--before IBM's Deep Blue beat Garry Kasparov in a six-game match. Just recently, computer programs have been developed that can meet a much tougher conceptual challenge--consistently drawing, betting, and bluffing to beat a table of five top-level human players at no-limit Texas hold 'em poker.

Of course, overoptimism about artificial intelligence back in 1957 does not prove that similar optimism at present would be without foundation. But it does suggest that those with the highest levels of imagination and expertise in the field may be so excited about its advances that they have a tendency to understate its challenges. After all, here in 2020, 63 years after Simon and Newell's speech, most of what we call "artificial intelligence" is really better described as "machine learning"--that is, the computer can look at data and train itself to make more accurate predictions. But we remain a considerable distance from the endpoint described by Simon, that "the range of problems they [machines] can handle will be coextensive with the range to which the human mind has been applied."
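Since "machine learning" in this narrow sense just means a program that improves its predictions by looking at data, a minimal sketch may help make the phrase concrete. The example below is purely illustrative and is not drawn from Simon and Newell's work or from this article: it assumes Python with the scikit-learn library, and it fits a simple classifier to the library's bundled handwritten-digit images, then scores it on examples it has never seen.

```python
# Minimal sketch of "machine learning": a model looks at labeled examples
# and trains itself to predict labels for examples it has not seen.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Load roughly 1,800 8x8 images of handwritten digits with their true labels.
X, y = load_digits(return_X_y=True)

# Hold out a test set so accuracy is measured on unseen examples.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# "Training" is fitting the model's parameters to the labeled examples.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# The trained model now makes predictions on data it never saw; this score
# is the sense in which the program has "learned" from the data.
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The point of the sketch is only that the program's competence comes from the data it was shown, not from a hand-written rule for each digit, which is roughly the gap between "machine learning" and the broader ambitions Simon described.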

A version of this article first appeared on Conversable Economist.

 


Timothy Taylor

Global Economy Expert

Timothy Taylor is an American economist. He is managing editor of the Journal of Economic Perspectives, a quarterly academic journal produced at Macalester College and published by the American Economic Association. Taylor received his Bachelor of Arts degree from Haverford College and a master's degree in economics from Stanford University. At Stanford, he was winner of the award for excellent teaching in a large class (more than 30 students) given by the Associated Students of Stanford University. At Minnesota, he was named a Distinguished Lecturer by the Department of Economics and voted Teacher of the Year by the master's degree students at the Hubert H. Humphrey Institute of Public Affairs. Taylor has been a guest speaker for groups of teachers of high school economics, visiting diplomats from eastern Europe, talk-radio shows, and community groups. From 1989 to 1997, Professor Taylor wrote an economics opinion column for the San Jose Mercury-News. He has published multiple lectures on economics through The Teaching Company. With Rudolph Penner and Isabel Sawhill, he is co-author of Updating America's Social Contract (2000), whose first chapter provided an early radical centrist perspective, "An Agenda for the Radical Middle". Taylor is also the author of The Instant Economist: Everything You Need to Know About How the Economy Works, published by the Penguin Group in 2012. The fourth edition of Taylor's Principles of Economics textbook was published by Textbook Media in 2017.

   