Chasing Westworld: Machine Learning and AI

I went to an amazing conference put on by the TEA this year on Storytelling, Architecture, Technology and Experience. A number of great speakers and subject matter experts took the stage, including a panel discussing Westworld and how close we may be to a world where life-like robots live among us.

Animatronic legend Garner Holt was there and shared his advancements in robotics and in creating life-like facial expressions. If you haven’t seen his latest Abraham Lincoln bust, it’s a must-see, and it illustrates how close we could be to building humanoid robots that at least look the part.

https://www.youtube.com/watch?v=dJg2Caz3TF0

Later, Steven Tieg, the incredibly well-educated and accomplished CTO of Xperi, took the stage. He addressed the AI side of the equation as it pertains to creating truly intelligent robots.

One of his analogies was that of an ant. It went something like this:

  1. An ant is in essence a simple machine.

  2. When put in a complex environment, the simple ant behaves in complex ways.

  3. A colony of ants placed in a complex environment shows an exponentially greater degree of complexity.

His analysis of current advances in AI was that progress has been slower than necessary because the work is all being done in a lab. He asserted that once AI was introduced into the complex real world, it would adapt more quickly and become complex, much like the ants. I can’t disagree with that assertion at all.

The ant theory is an interesting and well-thought-out analogy, and being somewhat of a biology geek myself, it’s one I really identify with.

I do find myself asking one question, though, when looking at it.

Is an ant really just a simple machine?

Given that the foundation of the analogy hinges on that assertion, the answer to that question really makes a difference. We’ll come back to this a little later.

In the world of AI theories, similar assumptions are made all the time. Most AI endgame predictions hold that once computers achieve or surpass the complexity of the human brain, including the ability to perform large amounts of parallel processing, the machine will inevitably “wake up.” The argument is that in the animal kingdom, vertebrate brains developed enough complexity to allow for a phenomenon called re-entry to enable conscious scenes; in other words, the brain, once complex enough, woke up. (It’s pretty intense reading, but if you want to explore re-entry, have at it here.)

There is only one major problem here: we’ve never actually observed a biological “awakening.” We’ve never seen an organism spring into consciousness in the biological realm. We know consciousness exists because we have it, so everything done here is an effort to reverse-engineer the phenomenon. To make matters more difficult, many neurologists don’t believe that consciousness is observable at all as a phenomenon separate from normal brain function.

None of this means a computer couldn’t “wake up” someday. It just means that the very basis for that prediction is, in itself, a potentially unprovable theory.

Now let’s go back to the ant and the claim that it is essentially a simple machine. On the surface, that is a fairly solid theory, given the small variety of tasks an ant carries out and the fact that an ant’s brain has about 40,000 times FEWER cells than a human’s. An ant isn’t complex enough to have the sentient consciousness of mammals and birds; however, it has something a machine does not. It is self-directed. Even single-celled organisms with no brains at all have some innate drive to go out and find the resources they need to survive. Simple machines don’t exhibit that same innate drive.

So is an ant just a simple machine, or is it more?

My point is that the theories of AI and machine learning, if they must be compared to biology, mirror scientifically observable phenomena like Natural Selection much more than they do theories like macro-evolution or awakening.

AI and machine learning systems are given a starting point. They are given a data set of important things to observe and look for, and then an algorithm through which to interpret that data, store the results in memory, and reinterpret it over time, all the while changing the algorithm to more accurately achieve the desired results (the “learning” part), making them, in essence, “intelligent.”
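To make that loop concrete, here’s a minimal sketch in Python. The toy task (fitting a single weight w so that w * x approximates y) and every name in it are my own illustration, not anything presented at the event:

```python
# A minimal sketch of the learning loop described above: start with a data
# set, an initial rule (here a single weight), and repeatedly adjust that
# rule so it more accurately achieves the desired results.

def learn(data, labels, steps=1000, lr=0.01):
    """Fit a single weight w so that w * x approximates y."""
    w = 0.0  # the starting point the system is given
    for _ in range(steps):
        for x, y in zip(data, labels):
            prediction = w * x      # interpret the data with the current rule
            error = prediction - y  # compare against the desired result
            w -= lr * error * x     # the "learning" part: adjust the rule
    return w

# Toy data where y is roughly 2 * x, so the loop should drive w toward 2.
print(round(learn([1.0, 2.0, 3.0], [2.1, 3.9, 6.2]), 2))  # ~2.04
```

The “intelligence” here is nothing more than that last update line, repeated many times.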

The danger is that once the algorithms are set in motion, the way they adapt is not very predictable. A quick Google search turning up AI classifying cats as dogs, or the bizarre iPhone “i” to “A?” auto-correct issue, shows how far we may be from machine learning actually improving upon the algorithms it is given, rather than altering them to produce nonsensical outcomes.
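As a toy, purely hypothetical illustration of that failure mode: a “learner” told to adapt by maximizing raw accuracy on a mostly-dog data set will settle on calling everything a dog, cats included.

```python
from collections import Counter

# Hypothetical training set: 95 dogs, 5 cats.
training_labels = ["dog"] * 95 + ["cat"] * 5

# The "learner" adapts by picking whichever single answer maximizes accuracy.
best_guess = Counter(training_labels).most_common(1)[0][0]

accuracy = sum(label == best_guess for label in training_labels) / len(training_labels)
print(f"always answer '{best_guess}': {accuracy:.0%} accurate")
# 95% accurate overall -- yet every cat gets classified as a dog.
```

By the only score it was given, the rule genuinely “improved”; the nonsense is only visible from the outside.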

Imagine if those outcomes had implications greater than returning the wrong image or messing up your text messages.

Maybe someday we will have machines that make increasingly better decisions, machines that can interpret the world much more accurately than they can now and that look and seem human upon initial inspection. As a final thought, however: how many of the decisions we make as humans consider only the data at hand? If there’s one thing humans are, it’s predictably irrational. Emotion often overrides reason, especially when it comes to making ethical decisions. In fact, when we feel in our hearts or our guts that a particular action is the right one, we typically employ reason to rationalize that decision in our heads, despite any data that may suggest the opposite.

I don’t pretend to be smart enough to have all the answers, just smart enough to ask some good questions. I can’t say we won’t ever catch Westworld; I’m just skeptical that it is as close as some may think. I don’t doubt robots will be part of our future society, but it’s uncertain at best that our humanoid counterparts will actually “wake up” at some point.

In any case, AI is definitely a field to watch and to stay out in front of, even if there is only a minor chance that the ghost in the machine escapes. I’ll leave you with the same thought Steven Tieg left us with at the TEA event.

“A machine that is sufficiently interesting enough to be intelligent, will find us [humans] to be less and less interesting to them.”

Sleep tight with that thought!