Artificial Intelligence

Posted on Dec 19, 2018 in Technology

Artificial intelligence—defined in the Oxford English Living Dictionary as “the theory and development of computer systems able to perform tasks normally requiring human intelligence”—is booming. Last year, technology market intelligence provider International Data Corporation forecasted a 54.4% compound annual growth rate of corporate global spending on cognitive and AI solutions through 2020, when AI-related revenues will top $46 billion. In 2016, global financial advisor UBS said, “We expect AI’s industry growth will start to explode and its impact on business and society will begin to emerge” by the end of the decade, pointing to a future where “AI-powered machines and software will likely start to untether from human supervision, embarking on their fateful path as sentient beings.” AI proponents tout such potential benefits as efficiency, elimination of manual tasks and new solutions for social problems.

But not everybody is on board. In 2015, for example, a host of business and academic leaders including Elon Musk, Steve Wozniak and the late Stephen Hawking signed an open letter urging restraint in the development and application of AI. “It is important to research how to reap its benefits while avoiding potential pitfalls,” the signatories said.

Gary Smith, a professor of economics at Pomona College in Claremont, Calif., and author of the just-published book, “The AI Delusion,” also advises caution. Professor Smith offered his thoughts in an interview with The Balance Sheet.

Q: Professor Smith, what do you see as the biggest misconception about artificial intelligence?

A: That computers are smarter than humans. AI algorithms excel at narrowly defined tasks that have clear goals, such as tightening bolts, checkmating a chess opponent or reducing a building’s energy consumption. These tasks can be very useful, but AI doesn’t “think” in any real sense of the word.

Q: Why do computers have trouble moving outside those narrowly defined tasks?

A: Because it’s so difficult to mimic how the human brain understands the world. A revealing example of AI’s limitations is the Winograd Schema Challenge, a test of machine intelligence: What does “it” refer to in this sentence: “I can’t cut down that tree with that axe; it is too thick [or small]”? Current AI programs don’t know, because they don’t know what the words really mean. As one prominent AI researcher said, how can machines take over the world when they can’t even figure out what “it” refers to in a sentence?

Q: But haven’t AI systems mastered some incredibly complex tasks, such as the Asian board game Go?

A: A freakish, superhuman skill at board games is great for publicity but has little to do with making real-world decisions that require critical thinking. AI is very good at data mining, but data mining is fundamentally flawed because we think that patterns are unusual and therefore meaningful. In Big Data, patterns are inevitable and therefore meaningless. The bigger the data, the more likely it is that a discovered pattern is meaningless.
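Smith’s point about patterns in big data can be illustrated with a short sketch (purely illustrative, not from the interview, using hypothetical noise data): generate many columns of pure random noise, and some pair of columns will nonetheless look strongly correlated by chance alone.

```python
import numpy as np

# Illustrative only: 1,000 variables of pure noise, 50 observations each.
rng = np.random.default_rng(0)
n_obs, n_vars = 50, 1000
data = rng.standard_normal((n_obs, n_vars))  # nothing here is meaningful

corr = np.corrcoef(data, rowvar=False)  # all pairwise correlations
np.fill_diagonal(corr, 0)               # ignore each variable's self-correlation
best = np.abs(corr).max()

# With ~500,000 pairs, a "strong" correlation emerges from sheer chance.
print(f"Strongest correlation found among pure noise: {best:.2f}")
```

A data-mining algorithm that flagged that strongest pair would be reporting a pattern that is, by construction, meaningless—and the more variables we add, the stronger the best spurious correlation gets.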

Q: What do you regard as misapplications of AI?

A: Anything that’s based solely on identifying patterns without considering whether the patterns make sense, such as evaluating job applications and loan applications and picking stocks. Also, courts all over the U.S. are using computer models to make bail, prison-sentence and parole decisions based on statistical patterns that may be coincidental but can’t be evaluated because they’re hidden in black boxes.

Q: What’s the thesis of your book, “The AI Delusion”?

A: The elevator pitch is that the real danger today isn’t that computers are smarter than us, but that we think computers are smarter than us, and consequently we trust them to make important decisions for us.

Q: Do you think AI could reach the point that it could be trusted to make important decisions?

A: Yes, but it’s a long way off because computer algorithms would first have to truly understand the world and what words mean, plus have the capacity for common sense, wisdom and critical thinking.

Q: And they don’t?

A: Correct. Many people are trying to develop algorithms that have these qualities, but it’s very difficult. As of now, computer programs don’t possess anything resembling human wisdom and common sense. Douglas Hofstadter, one of the original AI giants, said, “There is absolutely no fundamental philosophical reason that machines could not, in principle, someday think, be creative, be funny, be nostalgic, be excited, be frightened, be ecstatic, be resigned, be filled with hope . . . But all that will come about only when machines are just as alive and just as filled with ideas, emotions and experiences as human beings are. And that is not around the corner. Indeed, I believe that it is still extremely far away.”

I agree. At this point in the development of AI, we should be very skeptical about turning important decisions over to computers.