Friday, November 8, 2013

Humans, AI won't be replacing you... yet

Nov 07, 2013

By Andy Ho, Senior Writer


CIRCA 2045, argues author Ray Kurzweil, machines will become smarter than people.

In his popular 2005 book, The Singularity Is Near, he calls the moment when this happens the "singularity". The bedrock idea is that machines with artificial intelligence (AI) matching the human level can be built within the lifetimes of people alive today.

With greater and faster processing power, such a machine would be able to reprogram itself into one more intelligent than itself. Since the result would be more intelligent than the most intelligent machine people can build, it would have superhuman intelligence.

But once this happens, this super-intelligent machine would go on to reprogram itself into a machine that is even more intelligent. This hyper-intelligent machine would then reprogram itself into an ultra-intelligent machine and so on, exponentially, perhaps without limit.

Based on known rates of advancement in computer processing power and related fields, Kurzweil figures that the tipping point in this hypothetical process of self-amplifying growth in AI capacity will come in 2045. If it does, with limitless intelligence on tap, the biggest worry is whether these ultra-intelligent machines might render humanity completely redundant.
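To make the arithmetic behind that kind of extrapolation concrete, here is a minimal back-of-envelope sketch. The starting year, doubling period and "human-level" threshold are illustrative assumptions chosen so the toy model lands on 2045; they are not Kurzweil's actual figures or method.

```python
# Toy extrapolation (illustrative assumptions, not Kurzweil's actual model):
# a capability index that doubles every two years, Moore's-law style,
# reaches an arbitrary "human-level" threshold of 2**16 in 2045.

def year_human_level_reached(start_year=2013, capability=1.0,
                             doubling_period_years=2, human_level=2 ** 16):
    """Return the first year the capability index reaches the threshold."""
    year = start_year
    while capability < human_level:
        capability *= 2.0
        year += doubling_period_years
    return year

if __name__ == "__main__":
    print(year_human_level_reached())  # -> 2045 with these assumed numbers
```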

All this was just a fringe idea that serious academics wouldn't touch until Australian National University philosopher David Chalmers published the first formal analysis of the singularity's possibility in a peer-reviewed journal. His 2010 paper in the Journal of Consciousness Studies attracted responses from 26 experts in various fields, which were published in the same journal in 2012.

Professor Chalmers' analysis of what is already known in AI, neuroscience and the philosophy of consciousness led him to feel that human-level AI would be "distinctly possible" before 2100.

Because of hardware and software advances, such a system would have the capacity to amplify its own intelligence. If this happens recursively - as a procedure that can repeat itself indefinitely - then the singularity, he felt, would be possible "within centuries" rather than within decades.

However, critics argue that the whole enterprise could founder at the very first step: emulating normal human intelligence. Proponents implicitly assume that this intelligence is located in the brain, an organ they liken to a computer - that is, a machine.

If, as is likely, the human brain is more than a machine, then it cannot be perfectly emulated. That is, apart from functioning like a mechanical computing system, if it does at all, the brain also has non-mechanical processes that no machine can reproduce. If so, an AI system will not actually have a mind, which means it won't be able to attain even the human level of intelligence - the singularity's takeoff point.

Proponents put the cart before the horse because they ignore the very question of what intelligence itself really is. Real human intelligence depends on cognition, which is always carried out by a living person, so it is "embodied", and always within specific human situations, so it is "situated".

This human cognition comes about through human interaction with the environment. This interaction is always carried out through the body's finely tuned sensory and motor systems. That is, the senses deliver environmental stimuli to your mind and your mind directs the actions which you carry out or enact in the real world through your body and limbs.

For there to be real human intelligence, the person must be able to autonomously experience the environment directly to decipher what the issues are and decide what to do about them.

That is why the human sensory and motor apparatus is crucial to human intelligence. But this is something that an AI system, even if it could emulate the brain, cannot achieve. It would be more akin to a "brain-in-a-vat".

For AI to beat us at our own game, it would need to be embodied in robots that can interact autonomously with the environment. But this would require that robots have sensors and effectors that are as perfect as the biological sensory and motor systems that humans possess.

The complexity of these systems is found not just in our five senses and our limbs, but also in how all bodily systems are integrated. This is true right down to the hormones and neurotransmitters at the cellular level, which can react in real time to environmental stimuli (stimuli that trigger chemical signals in the body).

Without emulating this finely tuned biocomplexity, robots with AI may achieve the intelligence level of an insect at best. Moreover, to build a robot that emulates this sensory and motor complexity would require so much coordination of research efforts across so many disciplines that progress, if any, would be dead slow. Thus the lack of a biological body is the biggest hurdle that any AI system attempting to end humankind must overcome.

The other fundamental hurdle is that an AI system can't autonomously tell the difference between a meaningful problem and a meaningless one. Nor can it distinguish between a significant inference and a trite one. A human programmer or quizmaster must be on hand to guide it.

This is because human objectives depend on human values, which we acquire culturally through years of real-life experience in the real world.

Since something as inchoate as values can't be programmed, the AI field has always sidestepped the issue. In place of values, only plain goals and straightforward constraints like "find a power source to recharge when battery is low" or "avoid collisions with all objects" may be programmed in.

Without values, however, such AI systems will always lack common sense and real-world understanding. Thus, achieving even the level of real human intelligence could well be unattainable. If so, the singularity's possibility is vanishingly small, and the idea that robots with AI may one day take over the world implausible.
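The contrast between programmed goals and acquired values can be made concrete with a small sketch. The function, rule names and thresholds below are hypothetical illustrations of the kind of hard-coded goals and constraints described above, not any real robot's control software.

```python
# A sketch of "plain goals and straightforward constraints" hard-coded into
# a hypothetical robot controller. Nothing here encodes values: the program
# cannot ask whether an action is meaningful, only whether a fixed rule fires.

def choose_action(battery_level, obstacle_ahead):
    """Pick an action from fixed rules; the thresholds are illustrative."""
    if battery_level < 0.2:      # goal: find a power source when battery is low
        return "seek_charger"
    if obstacle_ahead:           # constraint: avoid collisions with all objects
        return "turn_away"
    return "continue_task"       # default when no rule applies

print(choose_action(battery_level=0.15, obstacle_ahead=False))  # -> seek_charger
```

However long such a rule list grows, the judgment about which problems are worth solving still sits with the human programmer.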
