Artificial Intelligence: A Brief Introduction to Sci-Fi in Real Life

Many people have heard the term “A.I.” before. Skynet from the Terminator series, HAL 9000 from ‘2001: A Space Odyssey’, and JARVIS from ‘Iron Man’ are the sorts of things that commonly come to mind. But what is A.I., really? On the 15th of October, Dr. Joel Schipper presented a brief introduction to the philosophy surrounding this up-and-coming field of technology.

To introduce the topic, he stated the basic goal of what A.I. is looking to accomplish: do things the way a human does, and understand how we make those choices. He then clarified the difference between Artificial Intelligence and Machine Intelligence. Although they seem to be the same, there is enough separation between the two to draw a line. Machine Intelligence means something that works autonomously, operates with “a high degree of intelligence,” and grounds itself in real-world problems. Artificial Intelligence looks at recreating the logic of a human, but it assumes that everything is done perfectly: the sensors are all working, the small details are irrelevant, and it’s a clean and easy place. Now some new questions come up: what is autonomy, and what is intelligence? According to Nils Nilsson, autonomy means being “driven by immediate inputs and past experience.” Autonomy is a robot remembering a location and moving there on its own, and this is aided through machine learning. With either form of intelligence, machine learning is an integral part of it. Machine learning is the idea of a machine taking in data, such as photos or numbers, and changing how it works based on that new data.
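To make that idea a bit more concrete, here is a minimal sketch in Python (my own illustration, not code from the talk) of a program that changes how it behaves based on the data it takes in: it learns a numeric threshold separating “small” examples from “large” ones by nudging that threshold every time it guesses wrong.

```python
# A toy illustration of machine learning: the program adjusts a single
# parameter (a threshold) based on labeled examples it is given, so its
# later behavior depends on the data it has seen.

def learn_threshold(examples, learning_rate=0.1, passes=50):
    """examples: list of (value, label) pairs, where label is 0 ("small") or 1 ("large")."""
    threshold = 0.0
    for _ in range(passes):
        for value, label in examples:
            prediction = 1 if value > threshold else 0
            if prediction != label:
                # Wrongly called it "large": raise the bar.
                # Wrongly called it "small": lower the bar.
                threshold += learning_rate if prediction == 1 else -learning_rate
    return threshold

if __name__ == "__main__":
    data = [(0.2, 0), (0.9, 1), (0.4, 0), (1.5, 1), (0.1, 0), (1.1, 1)]
    t = learn_threshold(data)
    print(f"Learned threshold: {t:.2f}")   # settles around 0.4 for this data
    print("Is 1.3 'large'?", 1.3 > t)      # the answer now depends on what it saw
```

Real systems learn millions of parameters instead of one, but the principle is the same: new data changes how the machine works.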

The next question, “What is intelligence?”, is one that has had many answers but very little agreement. Does getting the right answer count, regardless of how you got there? Is correctness even the right way to think about it? Or does intelligence mean acting like a human? How do we test for the existence of intelligence? This is where the Turing test comes in. The Turing test is a scenario in which two humans and a robot communicate with each other. One human, the judge, asks both the other human and the robot questions, and has to guess which one is the robot. If the judge guesses that the robot is the human more than 50% of the time, the robot passes the Turing test. Seems pretty tough to beat, right? Well, the reality is that it has already been beaten. A program named ELIZA, crafted in the mid-1960s, was able to convince hundreds of people that it was a person through basic psychotherapy tricks. This means that we still don’t have a good test for intelligence, so we have to keep attempting to build it, and from there we can define it and test it.
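As a very small sketch (my own, not something presented in the talk) of the pass criterion described above: after many rounds of questioning, you simply check how often the judge mistook the machine for the human.

```python
def passes_turing_test(judge_guesses):
    """judge_guesses: one boolean per round, True when the judge thought the machine was the human."""
    fooled = sum(judge_guesses)
    return fooled / len(judge_guesses) > 0.5  # fooled more than half the time

# Ten rounds of questioning; the judge was fooled in six of them.
rounds = [True, False, True, True, False, True, True, False, True, False]
print(passes_turing_test(rounds))  # True: 6 out of 10 is more than 50%
```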

Dr. Schipper then went on to talk about the different methods of creating intelligence. What you build the machine out of may be one route: building a machine from proteins and acids rather than silicon. The actual structure could be the key as well; rather than funneling data through a circuit, you create something similar to a brain, which handles many signals at once. Even quantum computing has been considered as a path to intelligent machines. Until one of them actually succeeds, though, we don’t know which path will produce a machine capable of intelligence.

From these approaches, there were also suggestions as to what intelligence requires. One requirement would be that the intelligence is able to understand symbols and ideas as broad concepts, such as recognizing a chair and understanding what makes that chair a chair. Another requirement may be signal interpretation, or being able to sense something it touches or hears and properly interpret it. Finally, the last requirement may be statistical reasoning. Statistical reasoning is what all the big names in A.I. are using: Watson, AlphaGo, facial recognition, and even Google’s suggested search terms use this approach, but some argue that it’s simply a ton of data combined with a lot of processing power, and not actually intelligence.
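For a rough sense of what statistical reasoning looks like in practice, here is a minimal Python sketch (my own illustration, not code from the talk) in the spirit of suggested search terms: given enough past queries, the system predicts whichever word most often followed the one you just typed — counting, not understanding.

```python
from collections import Counter, defaultdict

# Toy statistical reasoning: suggest the next word of a search query purely
# from how often words followed each other in past queries.
past_queries = [
    "weather today",
    "weather tomorrow",
    "weather today in paris",
    "movie times today",
    "movie reviews",
]

# Count how often each word follows each other word.
followers = defaultdict(Counter)
for query in past_queries:
    words = query.split()
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1

def suggest(word):
    """Return the most common follower of `word` in past data, if any."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("weather"))  # "today" (seen twice, versus "tomorrow" once)
print(suggest("movie"))    # "times" or "reviews" (a tie in this tiny dataset)
```

With billions of queries instead of five, the same counting trick starts to look remarkably intelligent, which is exactly what the skeptics' objection is about.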

As Dr. Schipper began to wrap up the presentation, he summarized these approaches but left us a question to think on as we left. We have countless instances of a machine thinking about or doing one thing better than a human, but nothing that can just do anything. So the question becomes, “If we’re going to make machines think, do they actually have to think like people?”
