Response to Little Robots

by redshift

Scott Adams has a great new post on free will called *The Little Robot That Could*. Adams is a staunch determinist (that’s the opposite of a free willy), but I don’t think he properly defended his position in the post. Here’s what I think. (Go read his post or this won’t make any sense.)

The robot’s instructions are to free will as neurons are to consciousness. This is not a defect in the instructions, merely a lack of quantity and complexity. The human mind, which most determinists would say is identical to the brain, operates through astronomical numbers of unintelligent bits. Individually, the neurons only transmit charges; together, they awaken. Droplets form waves, bits form computers, ants form a hive. The robot, however, lacks complexity. It does not awaken.

The robot lacks the ability to reflect on its own programming and make introspective judgments. AI simply isn’t at that level yet, and certainly not as described in his post. The robot could not, for example, weigh its options internally, ignoring or rewriting the code that tells it to use counters in its decision-making process. Similarly, a wave cannot evaluate its own strength at various points and redistribute water to form a more even surface. A (modern) computer can only recode itself in ways it was already coded to allow.

The ant colony, if I may digress, is more interesting. The hive *can* evaluate itself and form new principles. It can redistribute ants as needed, adapting to new challenges. Ants could be considered beefed-up neurons, and the hive is, in my opinion, one of the best models we have for the mind. Ants pass signals. They are ignorant of the larger structure they compose, namely the hive. Individual ants are inaccessible from the hive, just as neurons are inaccessible from the mind.


Waves of ants follow the minuscule instructions they’re capable of following, not directly orchestrated by any larger entity, and yet the sum of these minute actions accomplishes goals vital to the colony: the survival of the hive depends on it. The more ants you have, the larger and more complex the goals become. Now you’ve got an organized, goal-seeking, adaptive, introspective entity on a level above its constituent parts. Is it really so different from a human mind? Self-awareness may seem lacking, but I dare say that’s arguable, particularly if the complexity rose well above a normal hive’s.
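If you want to see the point in miniature, here’s a toy sketch (mine, not Adams’s robot, and not a claim about real ant biology). Every number in it — the grid size, the food location, the pheromone and evaporation rates — is made up purely for illustration. Each “ant” follows two dumb local rules: mark the spot if you’re standing on food, and drift toward the strongest nearby scent, otherwise wander. No ant knows about the colony or the goal, yet the group reliably converges on the food.

```python
# Toy emergence sketch: many agents with trivial local rules produce a
# colony-level behavior (converging on a food source) that no single agent
# "intends". All parameters are arbitrary, chosen only for illustration.
import random

SIZE = 20                       # grid is SIZE x SIZE
FOOD = (15, 15)                 # hypothetical food location
NEST = (2, 2)                   # hypothetical starting point
STEPS = 2000
N_ANTS = 100

pheromone = [[0.0] * SIZE for _ in range(SIZE)]
ants = [list(NEST) for _ in range(N_ANTS)]

def neighbors(x, y):
    """Cells an ant can step to (stay on the grid)."""
    cells = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    return [(a, b) for a, b in cells if 0 <= a < SIZE and 0 <= b < SIZE]

for step in range(STEPS):
    for ant in ants:
        x, y = ant
        if (x, y) == FOOD:
            pheromone[x][y] += 1.0          # local rule 1: mark food when standing on it
        options = neighbors(x, y)
        best = max(options, key=lambda c: pheromone[c[0]][c[1]])
        if pheromone[best[0]][best[1]] > 0 and random.random() < 0.8:
            nx, ny = best                   # local rule 2: usually follow the scent, if any
        else:
            nx, ny = random.choice(options)  # otherwise wander randomly
        ant[0], ant[1] = nx, ny

    # pheromone diffuses to neighboring cells and slowly evaporates,
    # so a gradient forms around the food and stale trails fade
    new = [[0.0] * SIZE for _ in range(SIZE)]
    for x in range(SIZE):
        for y in range(SIZE):
            spread = pheromone[x][y] / 5.0
            new[x][y] += spread
            for nx, ny in neighbors(x, y):
                new[nx][ny] += spread
    pheromone = [[v * 0.98 for v in row] for row in new]

near_food = sum(1 for x, y in ants
                if abs(x - FOOD[0]) + abs(y - FOOD[1]) <= 2)
print(f"{near_food}/{N_ANTS} ants ended up within 2 cells of the food")
```

Run it and most of the ants cluster around the food by the end, even though “find the food as a group” appears nowhere in any individual rule. That’s all I mean by goals existing on a level above the constituent parts.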

Note that I’m *not* making an argument for free will. That should be obvious, considering that I neither framed this as an argument nor said that consciousness is the *only* prerequisite for free will. I’m merely saying that the robot example is flawed unless you accept that free will is possible without consciousness.

I’m free-will agnostic. It’s just an interesting topic. (Obligatory reference to GEB here.)