Wednesday, September 21, 2011

Swarm theory to learning theory

I don't see a lot of people drawing connections between swarm intelligence and learning theory, and I don't know why. The two are inextricably linked. Swarm intelligence, for those of you who are not bio-researchers, sci-fi fans, or robotics aficionados, is that area of science where smart people in lab coats try to figure out how incredibly stupid beasts like ants and termites can build complex cities with skyscrapers that would actually put ours to shame if the scales were equal. Check out this link about the African termite. Air ducts, temperature regulation, recycling, they have it all.

The Artificial Intelligence (AI) community loves this stuff, because it gives them hope that they can actually build a smart robot. They have had enormous trouble trying to create an artificial brain as sophisticated as ours by programming it in the classic top-down model. "Let's see, now, this line of code says that if I'm standing in wet grass in sandals on a cold day holding a grocery bag and need to carry it up a slope, and if the slope is 1.2 degrees or greater and the moisture level exceeds..." You can see how they might run into trouble after a few hundred million lines of code. It starts to get buggy pretty quickly.

So what they've done, in an almost perfect pivot for anyone who appreciates a good (or bad) pun, is quit worrying about bugs and started studying them. They've quit thinking about people brains and started thinking about bug brains. They've discovered that the little critters only have so much code in their heads, very simple commands like, "if another ant has been here before, drop a pellet on this spot." Simple code, and not much of it. But when you put 20,000 of these tiny, illiterate insects together randomly and turn them loose to follow those simple commands, they actually behave as though they were intelligent. As though there were some master plan or brilliant top-down management. The whole is truly far greater than the sum of the parts. (This is one reason it's also called "emergent intelligence": smart behaviors emerge from non-smart creatures under the right circumstances.)
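To make that concrete, here's a toy sketch (mine, not from any actual ant research) of that one quoted rule in action. A couple hundred simulated ants wander a grid, dropping a pellet wherever they stand and preferring the most-marked neighboring cell, and that alone is enough for trails and hotspots to emerge with no plan anywhere:

```python
import random

# Toy emergent-intelligence demo: each ant follows one rule, "drop a
# pellet where you stand, and head for the most-marked cell nearby."
GRID, STEPS, ANTS = 20, 500, 200

pheromone = [[0] * GRID for _ in range(GRID)]
ants = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(ANTS)]

def neighbors(x, y):
    cells = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             if dx or dy]
    return [(a % GRID, b % GRID) for a, b in cells]  # wrap-around world

for _ in range(STEPS):
    for i, (x, y) in enumerate(ants):
        pheromone[x][y] += 1                     # drop a pellet on this spot
        options = neighbors(x, y)
        best = max(options, key=lambda c: pheromone[c[0]][c[1]])
        # Follow the crowd's markings; wander randomly if nobody's been near.
        ants[i] = best if pheromone[best[0]][best[1]] else random.choice(options)

# Crude picture of the grid: darker characters mean heavier traffic.
peak = max(max(row) for row in pheromone) or 1
for row in pheromone:
    print("".join(" .:*#"[min(v * 4 // peak, 4)] for v in row))
```

No ant in that loop knows anything about trails. The trails are the emergent part.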

You can see how an AI guy might really like this angle of pursuit. Not a billion lines of code, but a few hundred thousand. And guess what, it works. This research has already yielded practical results that are on the market today. By writing software-based pseudo-termites, little programs that each know how to do only a few things, and filling a larger program with these agents, developers can start to imitate, essentially to create, this kind of intelligence. Southwest Airlines uses what are called ant-based routing programs to help pilots find gates efficiently. The movie industry now creates its huge battle scenes this way, in software, by giving all the animated characters the same set of rules (like, "Try to take the enemy's head off with your sword," and, "If you lose your sword, stab them with your knife"). This technology was pioneered in a program called Massive, used first in the Lord of the Rings trilogy. And that turned out pretty well.
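Here's a minimal ant-based-routing sketch in the same spirit (my illustration, nothing to do with Southwest's actual software): simulated ants choose between a hypothetical short route and a long route to a gate, cheaper trips lay down more pheromone, old pheromone evaporates, and the short route ends up owning nearly all the marking:

```python
import random

routes = {"short": 2.0, "long": 5.0}      # hypothetical trip costs
pheromone = {"short": 1.0, "long": 1.0}   # start with no preference
EVAPORATION = 0.95                        # old markings fade every round

for _ in range(500):
    # Each ant picks a route with probability proportional to its pheromone.
    names = list(routes)
    pick = random.choices(names, weights=[pheromone[r] for r in names])[0]
    # Cheaper trips finish sooner, so they get reinforced more heavily.
    pheromone[pick] += 1.0 / routes[pick]
    for r in pheromone:                   # evaporation keeps the colony adaptive
        pheromone[r] *= EVAPORATION

print(pheromone)   # "short" typically ends up with almost all the marking
```

The evaporation step matters: if markings never faded, the first lucky route would win forever, and the system couldn't adapt when a gate closes.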

But AI scientists want more, of course; they want a human brain. And they have made some cognitive connections that are stunning. They will tell you, for example, that the entire human brain works like an ant-based routing system. The neurons are the little bots, and the synapses are the connections through which they signal one another, the wiring that throws billions of simple agents together and lets them run around as a crowd. These programmers now figure that if they can get the right "little bot" coding, they can recreate a human brain. The old saw that a software program can only do what you program it to do just doesn't hold anymore. Going about it this way, it might do anything. (Anyone remember HAL? Or I, Robot?)
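You can even sketch the analogy directly (a toy, obviously, not a serious brain model): three identical threshold units, none of which can compute an exclusive-or on its own, do it easily once they're wired together. The whole computes something none of its parts can:

```python
def unit(inputs, weights, threshold):
    # One "little bot": fire (1) if the weighted sum of incoming
    # signals reaches its threshold, otherwise stay quiet (0).
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def xor(a, b):
    h1 = unit([a, b], [1, 1], 1)        # fires if a OR b
    h2 = unit([a, b], [1, 1], 2)        # fires only if a AND b
    return unit([h1, h2], [1, -1], 1)   # fires if OR but not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))    # prints the XOR truth table
```

Each unit is trivially dumb; the interesting behavior lives entirely in the wiring.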

Okay, so what has all that got to do with learning theory? Well, everything, considering that what the little bots end up with is more than, and different from, what they started with. The entire process of what we call intelligent behavior could well be called learning. There may not be a difference. But setting all that aside, the answer is: let's focus on small, simple codes and instructions. Did you read Good to Great, by Jim Collins? (If not, please do... I'll wait. It's worth it.) Jim's research came to the conclusion that great companies, not just good ones but truly great companies, have a singular focus on a few easily understood, highly motivational principles. He calls this the "Hedgehog Concept," for reasons you will understand if you read the book. What he's saying is that if you focus on a few simple rules, commands if you will, and get a whole lot of semi-intelligent agents (i.e., employees) out there doing everything they can to make those simple things true, something truly outstanding will emerge. Like, maybe, the equivalent of a 160-story skyscraper. Or Southwest Airlines. This is empowerment to the Nth degree. So long as the employees are pursuing these simple goals, and they're the right goals, they will create something unstoppable.

Now, from learning to training: train the goals. We spend a lot of time as educators, trainers, and e-learning professionals focusing on cognitive theory, learning preferences, gap analyses, performance assessment... but what if, without abandoning all that, we just lowered the intensity a bit? And what if we raised the intensity of identifying those four or five key things that, if everyone knew them, followed them, and applied them to their own circumstances, would result in something bigger than the sum of the parts? What if we let go of a little control, focused more on measuring whether our learners understood and bought into the main mission, and whether they could (and wanted to) apply it to whatever they did, and focused a little less on analytically measuring whether they can lock the widget onto the wonket? They'll figure out widgets and wonkets if they know the end game and are committed.

Swarm theory. It's a learning theory waiting to be applied.