1.1 Modern Beginnings
In October 1950, Alan Turing posed the now-famous question, “Can machines think?” The question is harder to answer than it first appears because its key terms resist precise definition. What exactly is meant by “machine,” and how do we define “thinking”? What are the elements that classify something as a machine? Can something biological, such as an ant, be considered a machine? Turing defined machines as digital computers that operate based on stored programs and constraints and follow instructions precisely.1 How would you amend this definition to fit the standards of the 21st century? Put Turing’s definition of “machine” into an AI chat (such as ChatGPT), then add your amended definition below it and see what the AI model thinks of your amendment.
The Turing Test
Turing deliberately avoided giving a specific definition of thinking, recognizing that disagreements over definitions could make it impossible to decide whether a machine is thinking. Instead, he addressed the question “Can a machine think?” through a variation of the imitation game, a proxy for thinking that focuses on observable behavior rather than abstract definitions. This test of “thinking” became known as the Turing Test: a test created by Alan Turing to determine a machine’s ability to exhibit intelligent behavior equivalent to that of a human being. The test follows a set of basic rules:
- There are three participants: a machine (Participant A), a human (Participant B), and an interrogator (Participant C, also a human).
- The interrogator asks questions of Participant A and Participant B through an intermediary (i.e., text) so the interrogator cannot see who is answering.
- The goal is for the interrogator to determine which participant is the human and which is the machine.
- If the machine convinces the interrogator that it is human, this indicates that the machine exhibits behavior consistent with thinking.
The Turing Test is a useful proxy for thinking because it requires a machine to demonstrate reasoning and analytical abilities in real time, adapting its responses to the specific scenario it faces. Rather than examining how the machine thinks internally, the test treats outcomes, the observable behavior, as the key evidence of thought.2
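To make the structure of the test concrete, here is a minimal Python sketch of the three-participant setup. The participant functions, their canned replies, and the single question are illustrative assumptions rather than anything from Turing’s paper; the only point the sketch captures is that the interrogator judges from text alone, without knowing which label hides the machine.

```python
import random

def machine_reply(question: str) -> str:
    """Participant A: a stand-in 'machine' with a canned answer (assumption for illustration)."""
    return "I enjoy a quiet evening with a good book."

def human_reply(question: str) -> str:
    """Participant B: a stand-in human respondent (assumption for illustration)."""
    return "Honestly, I usually just scroll my phone until I fall asleep."

def imitation_game(question: str) -> None:
    # The interrogator (Participant C) sees only text labeled X and Y,
    # with no way to tell which label hides the machine.
    participants = {"machine": machine_reply, "human": human_reply}
    labels = ["X", "Y"]
    random.shuffle(labels)
    hidden = dict(zip(labels, participants.items()))

    for label in sorted(hidden):
        identity, reply = hidden[label]
        print(f"{label}: {reply(question)}")

    guess = input("Interrogator: which label is the human (X/Y)? ").strip().upper()
    actual = next(lbl for lbl, (who, _) in hidden.items() if who == "human")
    print("Correct!" if guess == actual else "Fooled: the machine passed this round.")

if __name__ == "__main__":
    imitation_game("What did you do last night?")
```

In a real run of the game, the interrogator would ask many questions and both participants would try to appear human; the sketch compresses that into a single exchange to keep the structure visible.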
Artificial Intelligence
Alan Turing’s research set the stage for what has since been classified as artificial intelligence (AI): computer systems capable of simulating human intelligence to perform complex tasks. However, the term was not coined until about five years later, when John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon proposed a summer research project on artificial intelligence, known as the Dartmouth proposal. This two-month-long project brought 10 researchers together to explore using machines to simulate every aspect of learning or any other feature of intelligence. They proposed research that would tackle problems in the following categories: automatic computers, programming computers to use language, neural networks, the theory of the size of a calculation, self-improvement, abstractions, and randomness and creativity.3 This ambitious undertaking moved AI from theory and conceptual ideas to a serious academic and scientific discipline. The project did not produce any physical or working AI models. However, it did establish the areas of machine learning, natural language processing, automated reasoning and problem-solving, and neural networks, all of which are foundational to AI.
After the Dartmouth workshop, interest in the development of artificial intelligence grew, not only in science and academia but also among the general public. Companies were excited by the potential of AI tools and the revenue they could generate, and the development and implementation of various AI systems fueled this excitement. Many of the first AI tools were actually geared toward playing games and solving puzzles. Chess, in particular, has long been associated with intelligence: with its enormous space of possible moves and strategies, mastering the game has traditionally been reserved for only the most capable minds.
Around the same time as the Dartmouth proposal, three computer scientists from the RAND Corporation (Allen Newell, Herbert A. Simon, and J. C. Shaw) collaborated on an AI project that mimicked human problem-solving. Their system, called the Logic Theorist, eventually proved many of the theorems from the Principia Mathematica and is still cited in academic research papers. The system was built to operate within the proof space of mathematics; its directive was to find a sequence of logical steps that would lead to the proof of a mathematical theorem. By demonstrating the potential of AI for complex reasoning, the Logic Theorist marked a significant milestone in the development of AI systems.
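The core idea of searching a proof space, starting from known statements and chaining inference steps until the target theorem appears, can be illustrated with a toy example. The following sketch is not the Logic Theorist’s actual algorithm or notation; the statements, the single modus ponens rule, and the breadth-first strategy are simplifying assumptions made for illustration.

```python
from collections import deque

# Toy proof search: start from known statements, repeatedly apply an
# inference rule, and stop when the target statement has been derived.
AXIOMS = {"P", "P -> Q", "Q -> R"}  # made-up placeholder statements

def modus_ponens(known: frozenset) -> set:
    """Derive B whenever both A and 'A -> B' are already known."""
    derived = set()
    for statement in known:
        if "->" in statement:
            a, b = (s.strip() for s in statement.split("->", 1))
            if a in known:
                derived.add(b)
    return derived

def prove(target: str):
    """Breadth-first search for a sequence of steps that derives the target."""
    start = frozenset(AXIOMS)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        known, steps = queue.popleft()
        if target in known:
            return steps
        for new_fact in modus_ponens(known) - known:
            next_known = frozenset(known | {new_fact})
            if next_known not in seen:
                seen.add(next_known)
                queue.append((next_known, steps + [new_fact]))
    return None

print(prove("R"))  # ['Q', 'R']: P and P -> Q give Q, then Q and Q -> R give R
```

Even in this tiny form, the sketch shows why such systems were considered a breakthrough: the machine is not retrieving a stored answer but constructing a chain of valid steps on its own.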
Following the development and design of the Logic Theorist, the first chess AI system, Mac Hack VI (a fantastic name), was created in 1966 by Richard Greenblatt at MIT. The program applied sequential logical search, in the spirit of the Logic Theorist, to chess, and it became the first chess program to compete in human chess tournaments, beating many novice players and earning an official chess rating. Chess AI continued to advance from there, culminating in IBM’s Deep Blue, which beat Grandmaster Garry Kasparov in 1997.
Early Optimism
The work on AI tools and programs in the 1950s and 1960s seemed on track to fundamentally change our relationship with technology. These promising innovations fueled excitement, hype, and speculation about the future of AI. Consider the following quotes from prominent publications about the hopeful future of AI:
In February 1966, Time magazine stated:
By 2000, the machines will be producing so much that everyone in the U.S. will, in effect, be independently wealthy. With government benefits, even nonworking families will have, by one estimate, an annual income of $30,000-$40,000 (in 1966 dollars). How to use leisure meaningfully will be a major problem.4
For reference, the midpoint of that range, $35,000 in 1966, would be approximately $335,000 in 2024. If you would like to do the math yourself, here is the equation:
$$\textrm{Adjusted Value} = \textrm{Starting Value} \times \frac{\textrm{CPI in 2024}}{\textrm{CPI in 1966}}$$
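As a quick sanity check, here is the same calculation in Python. The CPI values below are approximate annual averages used as assumptions for illustration; substitute the official figures from whatever source you prefer.

```python
# Approximate U.S. CPI annual averages (assumptions for illustration).
CPI_1966 = 32.4
CPI_2024 = 313.7

def adjust(starting_value: float) -> float:
    """Adjusted Value = Starting Value x (CPI in 2024 / CPI in 1966)."""
    return starting_value * (CPI_2024 / CPI_1966)

for dollars_1966 in (30_000, 35_000, 40_000):
    print(f"${dollars_1966:,} in 1966 is roughly ${adjust(dollars_1966):,.0f} in 2024")
```

With these CPI values, the $30,000–$40,000 range works out to roughly $290,000–$390,000 in 2024 dollars, with the $35,000 midpoint landing near the $335,000 figure cited above.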
Just over a year later, the New York Times claimed:
By the year 2000 people will work no more than four days a week and less than eight hours a day. With legal holidays and long vacations, this could result in an annual working period of 147 days on and 218 days off.5