Sunday, October 25, 2015

Junk Science: No, Slime Mold Isn't Intelligent

I saw this on YouTube:


Anthony Carboni's misleading comment here earns this video a coveted "Junk Science" badge from me. Why? Well, it's not for the science per se... it's due to his claim that the mold finds the shortest path between food sources, with "no wrong turns" in the maze. Now, that may simply be a poor choice of words, but in science education, words matter. As I explained in the comments,
"No wrong turns" is not how I would describe what I just saw in this video. The mold filled the maze and then retracted to leave a path with the smallest surface area once the goal had been found. In science it's important to be accurate in our observations before we start wondering if the behavior we observed is intelligent.

Here's WHY the choice of words matters. In response to this comment I was asked, "How do you know the mold filled the maze and then retracted?"

Ignoring the fact that I just said how I knew it in the paragraph above, I'll expound: I watched the video and paid attention to what I saw.

Freeze the video at the 52-second mark. What do you see?

No wrong turns? BUSTED.

There are plenty of "wrong turns". The mold simply takes all paths at the same time. Now forward to 54 seconds. What do you see? 

The correct solution, arrived at by brute force.

A jump-cut to support Carboni's misleading statement "No wrong turns". In fact, the mold made no turns at all... it expanded outward to fill the available space, which happened to be maze-shaped. In computing, this is described as a brute-force approach. The mold's biology allows it to explore every path simultaneously, much like a program that spawns separate threads to explore every decision it must make. Brute force solutions are quick and accurate if you have the means to implement them, but they're pretty much the opposite of "intelligent".
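The "explore every path at once" behavior maps directly onto a breadth-first flood fill. Here's a minimal sketch using a toy maze of my own (an assumption for illustration, not the maze from the video): the frontier expands into every open neighbor simultaneously, making no decisions at all, and never re-enters a cell it has already covered.

```python
from collections import deque

# Toy maze: '#' is a wall, 'S' the start, 'G' the goal.
MAZE = [
    "#########",
    "#S..#...#",
    "#.#.#.#.#",
    "#.#...#G#",
    "#########",
]

def flood_fill(maze):
    """Expand outward from S into every open cell at once, like the mold."""
    cells = {(r, c): ch for r, row in enumerate(maze)
                        for c, ch in enumerate(row)}
    start = next(p for p, ch in cells.items() if ch == "S")
    goal = next(p for p, ch in cells.items() if ch == "G")
    frontier = deque([start])
    came_from = {start: None}          # covered cells: the "slime trail"
    while frontier:
        r, c = frontier.popleft()
        if (r, c) == goal:
            break                      # food found
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            # Never re-enter a covered cell, just as new growth
            # avoids pre-existing slime.
            if cells.get(nbr, "#") != "#" and nbr not in came_from:
                came_from[nbr] = (r, c)
                frontier.append(nbr)
    return came_from, start, goal

covered, start, goal = flood_fill(MAZE)
print(len(covered))  # the "mold" covered every open cell, dead ends included
```

Note that the fill reaches the goal only after covering the dead ends too. Plenty of "wrong turns", all taken at the same time.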

But the question did make me look for another source to validate what my eyes told me here. Sure enough, more detailed descriptions reveal that this is exactly what happens. The mold expands outward, and as it retracts, it leaves behind slime. It "understands" absolutely nothing about its environment. It doesn't re-try used paths simply because slime mold won't grow on pre-existing slime. A bit more digging uncovered the original video, uncut and reasonably titled:


Note that in the actual experiment, the mold doesn't just expand to fill the space, it's deliberately placed by the researchers to fill the space before the experiment begins. This is a starting condition of the experiment. The original researchers do not interpret this as intelligence, and are very clear as to the mechanisms they think are being exercised. In Scientific American, however, Ferris Jabr accurately describes the behavior, but then proceeds to misinterpret it as intelligence. Fortunately, the headline writer almost got it right:
"How Brainless Slime Molds Redefine Intelligence"
Replace "Slime Molds" with "Reporters" and it's spot-on. In order to interpret this as intelligence, we have to redefine the word. It's not the mold that does that... it's the person reporting the findings. The further removed he is from the actual research, the more sensational the headline.

Again, we must be accurate in our observations before we start inferring intelligence. For instance, planets are not round because they "know" that a sphere is the most compact solid form. Nor does a meteor follow the most energy-efficient path to a planet's surface because it has the uncanny ability to compute such a path. They do these things because the laws of physics make them inevitable.

Likewise, if a body is amorphous (like a slime mold's) and needs to connect to as many food sources as possible, how could it not wind up tracing the shortest path between them? The mold isn't going to survive if it just spreads out and dries up. The nutrients must be disseminated throughout the organism, and in doing so efficiently, the organism is going to retract to the most economical shape. It cannot help but conform to the laws of physics and chemistry.
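The retraction stage is just as mechanical. Once the space is filled, every covered cell already "knows" which neighbor it grew from, so walking those links backward from the food source yields the shortest path with no computation worthy of the name. A sketch, using hand-written parent links for a five-cell corridor (in a real fill they would come from the expansion itself):

```python
# Toy parent links: each cell points back to the cell it grew from.
came_from = {"A": None, "B": "A", "C": "B", "D": "C", "E": "D"}

def retract(came_from, goal):
    """Walk the growth links backward from the goal to the origin."""
    path = []
    node = goal
    while node is not None:
        path.append(node)
        node = came_from[node]
    return list(reversed(path))

print(retract(came_from, "E"))  # -> ['A', 'B', 'C', 'D', 'E']
```

The shortest path simply falls out of the bookkeeping, which is the algorithmic analogue of physics and chemistry doing the work.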

The fact that this solution is not intelligently arrived at doesn't mean we can't USE the phenomenon intelligently. But we don't have to imagine that it's something it's not while doing so.

--==//oOo\\==--

The obvious and inevitable rebuttal to all of this is the assertion that we don't really know what "intelligence" is... so how can we say that what the mold is doing isn't intelligent?

That's actually pretty easy, particularly to anyone who's ever tried to program an artificial intelligence. Basically, I won't give a biological organism more slack than I'd give a machine when evaluating its intelligence.

When you look up "intelligence", you find a laundry list of attributes much more varied and broad than the few listed by Carboni, all joined with the conjunction "and". Intelligence is a lot of different abilities taken together. One of the reasons that a true artificial intelligence has not yet been created is that many of these things are extremely hard to implement without biology. And these same things are crucial to the development of intelligence. Biological organisms are bristling with sensory apparatus, even in a single-celled creature. Our computation and our chemistry are integrated in a way that is simply impossible to build into a computer. And it turns out that chemistry and physics naturally follow behaviors that are difficult to express mathematically. Because of this, when we see such behavior (as in the slime mold) we tend to jump the gun and conclude "intelligence!"

But while defining what intelligence is may be very difficult, that's not the problem at hand. It's not terribly difficult to point out something that's not intelligent. There are a vast array of machines and objects that do seemingly intelligent things without expending a thought. Nevertheless, we know that they're not intelligent, as the behavior they exhibit is produced "mechanically" (according to strict rules, be they physical, chemical, or software-modeled).

Some of these are obviously mechanical, such as the MONIAC, a water-powered analog computer that models an economy. Some are less obviously so: IBM's Jeopardy!-winning Watson, Siri, a chess program, or the ghosts in Pac-Man may all appear to be intelligent; they're not. AIs that are designed to pass a Turing Test are deliberately structured to conceal their nature in order to 'game' the evaluation. That one may fool an examiner doesn't make it "intelligent" any more than a lifelike sculpture is an actual person.

Personally, I'd say the slime mold's behavior is pretty obviously mechanical: it behaves in exactly the fashion, and uses exactly the techniques, that it would if it had been designed to solve the problem without making any decisions whatsoever. In other words, it's the solution I would implement if I couldn't be bothered to design an AI. That's how I understood what it was doing at first glance.

