Thursday, October 13, 2011

Speed to expertise in five easy steps

By now we all have heard of Malcolm Gladwell's "10,000 hour rule," which he put forward in his book Outliers. If you want to master something, you need something like ten thousand hours of serious, dedicated practice. Only then can you become truly outstanding. This isn't original with Gladwell; three guys named Ericsson, Krampe, and Tesch-Romer were writing about it back in 1993 (look here). It's a very persuasive argument, but it doesn't help much when you're trying to teach students or train employees. "Okay, I think you've got the hang of it now. Keep practicing that, and I'll be back in about... oh, ten years or so to see how you're coming along."

The fact is, we need an acceptable level of mastery, and we need it now. We don't need to split the arrow; we need to hit the target. We need competence. So the question becomes, how do we get this level of mastery out of a novice within the normal time frames of actual life? Or if not life, then education--classes, courses, semesters, and programs. Is it possible? How is it possible?

Here are five answers which, interestingly enough, come from the field of... expertise. Yes, there are people whose expertise is expertise. David F. Feldon is one of them, and he has an oft-quoted paper to prove it... all about the role of expertise in pedagogy and curriculum. Download it here; it's worth a read. But then, if you were looking for a scholarly article you probably wouldn't be storming the blogosphere, so let me pick a little low-hanging fruit and serve it up with the chilled wine of my own experience and observation. (I'll try to keep the cheese to a minimum.)

One. It's not just knowledge. The popular idea that experts simply know more is untrue. Wait, they do know more... that part is true. It's the "simply" part that is untrue. What experts do with their knowledge, how they manipulate it, how they categorize it, how they think about it, or don't think about it... that's what makes them experts.

Practical application: Knowledge is only one factor, so don't focus on it exclusively. "Death by PowerPoint" leads not to expertise, but rather to death. Or at least a semi-comatose state. You really need to start with how experts got to be experts in the first place.

Two. Don't count on experts to tell you how they got to be experts. The accuracy of self-reports by experts has been studied, and the results are not encouraging. Just to pull one of Feldon's little nuggets from its context: "...self-report errors and omissions increased as skills improved." (p.99) What this means is, many experts don't know, and may even misrepresent, how they got so good. My own experience is that world-class experts, and I've dealt with a few, are often wrapped snugly in the soft, warm blankets of ego, and will always have an answer for you, even if it's wrong. Or even if it comes not from sober reflection but rather from that mini-myth that is the human self-image. Which in experts has sometimes grown greater, and more mythical, in concert with the accolades.

An example... the big league pitcher who explained that the secret to his curve ball was the way he spun his fingers as he let go of the ball. Until slow-motion cameras clearly showed that the ball had already left his hand and traveled a foot or two before those fingers did their twitching.

Practical application: Let Superman save you, but trust Lex Luthor's data. (Hey. He's a scientist.) Use and choose content based on the data--what the experts in your field actually do, and don't do, as measured by people whose job it is to measure it.

Three. Big Buckets. Experts know how to sort through new data, new situations, contradictory information, just about anything that falls within their area of expertise, and place it into an appropriate category in order to address it, resolve it, or perhaps ignore it. You've seen this a thousand times, whenever you're dealing with an expert in an area in which you are not one. Take me and my auto mechanic. "The car kind of vibrates and makes a funny noise," I say. "Does the steering wheel vibrate?" he asks. "The whole car!" I answer. But what he's doing is going through systems. He's categorizing, eliminating possibilities. If the steering isn't particularly affected, it's probably not front-end alignment. So then he probes some other category. "Is the noise always the same?" "Well," I answer, "it's a lot less noticeable when I turn the radio up." (I did say I wasn't an expert.)

Practical application: Teach categories right from the start. The categories are something you actually can get with accuracy from those same vaunted experts... and you want the categories that the experts use, not the ones that the textbook writers use. It may take a little probing, but your SME will give you his or her big buckets. And those buckets are what learners need, so they have a practical place to keep and carry all the knowledge from those PowerPoint slide decks.
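The mechanic's elimination game is really a small decision tree over big buckets: each question rules out whole categories of causes instead of testing symptoms one by one. Here is a minimal sketch in Python; the buckets and questions are invented for illustration, not a real diagnostic procedure:

```python
# A toy version of the mechanic's "big buckets": each yes/no question
# eliminates whole categories of causes. Buckets and questions invented.
DIAGNOSTIC_TREE = {
    "question": "Does the steering wheel vibrate more than the rest of the car?",
    "yes": {"bucket": "front-end / alignment"},
    "no": {
        "question": "Does the noise change with engine speed?",
        "yes": {"bucket": "engine / drivetrain"},
        "no": {"bucket": "suspension / exhaust"},
    },
}

def diagnose(tree, answers):
    """Walk the tree with a sequence of yes/no answers; return the bucket
    reached, or a signal that more questions are needed."""
    node = tree
    for ans in answers:
        if "bucket" in node:
            break  # already landed in a category
        node = node["yes"] if ans else node["no"]
    return node.get("bucket", "needs more questions")

diagnose(DIAGNOSTIC_TREE, [False, True])  # -> "engine / drivetrain"
```

The point of the sketch is the shape, not the content: the expert's value is in having the right tree, and that tree is exactly what you want to extract from your SME.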

Four. Principles and theories. Experts know their way around the theoretical framework of their field, and are not typically hindered by what the answer "should be." A novice will assume a required solution and try to get there any way possible, usually by trial and error, while an expert will back up and assess all the inputs in light of a whole frame of reference, one bounded only by the principles that underlie all solutions. Then the expert will get to the answer on a much deeper, more permanent level.

Here's an example. Remember the "I Love Lucy" scene where the candy conveyor belt was moving too fast for Lucy and Ethel? Me neither; I'm not that old, but I found it online here. This is classic comedy, but it's also classic (though exaggerated) novice behavior. "Stop the candy from going by too fast" is their assumed solution, what they're trying to accomplish, but their frame of reference is limited to what they know how to do (grab candy, wrap candy, eat candy), and so everything they try is a bigger, funnier failure. But an expert would understand the bigger picture, the more general concepts, and so would know how the conveyor belt works, where to find a kill switch, when to go get help to avoid a bigger crisis. "Shut it down until you get it right" would be the expert's general principle.

Practical application: Teach the big, underlying principles. Maybe you're doing training, and you're thinking this is no place for graduate level content. But knowing only the nuts and bolts is not going to lead to expertise, or even to competence, but to chaos.

But then... the principles alone won't do it either. You need...

Five. Automaticity. Automatic-ness. The ability to perform the simple parts of complex functions subconsciously. Automatically. Experts don't have to think through everything; they can think about the higher-level requirements because the lower-level requirements are on auto-pilot. From Feldon (p. 98):
Expert sight-reading performance in music is a clear example of this process. While playing music with typical features, expert pianists rely on automated skills to recognize patterns and strike the appropriate keys in sequence (Lehmann & McArthur, 2002). Concurrently, they dedicate their conscious processing to dynamic synchronization with other performers. When the novelty or visual complexity of the sheet music exceeds the threshold of transfer for automated sight-reading skills, the musician engages in effortful, deliberate encoding to mediate the execution of the necessary subprocesses.
Translation: Sight-reading the music has to be automatic so they can focus on the art. When it's not automatic, they have to go back and practice to make it automatic.

Practical application: Drill and practice, baby, drill and practice! Maybe it's out of fashion, but the fact is, "acquired automaticity facilitates the development of expertise." Maybe your learners need to know how to answer certain customer questions, or deal with certain patient behaviors, or use a handful of complex formulas over and over to get to complex solutions. Get the new recruits to memorize. Drill them until they can do the core parts of it every time, even if that means giving them six or eight or fifteen statements that should be said to customers, all the time. "Mantras," if you will. Work with learners on those few, key, foundational skills or knowledge sets until they are automatic; this will absolutely increase their speed to expertise.
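For what it's worth, the "drill until automatic" loop is easy to sketch. The mastery criterion below (a streak of consecutive correct attempts, reset on any miss) is my own assumption, a crude stand-in for automaticity, not anything from Feldon:

```python
import random

def drill(items, check, streak_needed=3, rng=None):
    """Keep presenting items until each one has been answered correctly
    `streak_needed` times in a row -- a crude proxy for automaticity.
    `check(item)` performs one practice attempt and returns True/False.
    Returns the total number of attempts the learner needed."""
    rng = rng or random.Random()
    streak = {item: 0 for item in items}
    attempts = 0
    while any(s < streak_needed for s in streak.values()):
        pending = [i for i, s in streak.items() if s < streak_needed]
        item = rng.choice(pending)  # keep cycling through unmastered items
        attempts += 1
        if check(item):
            streak[item] += 1
        else:
            streak[item] = 0  # a miss resets the streak: not automatic yet
    return attempts
```

A perfect learner needs exactly `len(items) * streak_needed` attempts; real learners need more, and that gap is one rough way to see how far from automatic they still are.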

To sum up, you can get to competence a lot quicker than 10,000 hours if you follow the path that experts take. Teach them the knowledge, but always within the big buckets that experts use to solve problems. Give them the principles and theories, the framework on which the required behaviors stand or fall. Drill them on the key behaviors that should be automatic.

And never let the experts convince you they know how they got to be experts. Unless of course, your experts are experts on being experts.


Wednesday, September 21, 2011

Swarm theory to learning theory

I don't see a lot of people drawing connections between swarm intelligence and learning theory, and I don't know why. The two are inextricably linked. Swarm intelligence, for those of you who are not bio-researchers, sci-fi fans, or robotics aficionados, is that area of science where smart people in lab coats try to figure out how incredibly stupid beasts like ants and termites can build complex cities with skyscrapers that would actually put ours to shame if the scales were equal. Check out this link about the African termite. Air ducts, temperature regulation, recycling, they have it all.

The Artificial Intelligence (AI) community loves this stuff, because it gives them hope that they can actually build a smart robot. They have had enormous trouble creating an artificial brain as sophisticated as ours by programming it in the classic model. "Let's see, now, this line of code says that if I'm standing in wet grass in sandals on a cold day holding a grocery bag and need to carry it up a slope, and if the slope is x1, y1.2 or greater and the moisture level exceeds..." You can see how they might run into trouble after a few hundred million lines of code. It starts to get buggy pretty quickly.

So what they've done, in an almost perfect pivot for anyone who appreciates a good (or bad) pun, is they've quit worrying about bugs and started studying them. They've quit thinking about people brains and started thinking about bug brains. They've discovered that the little critters only have so much code in their heads, very simple commands like, "if another ant has been here before, drop a pellet on this spot." Simple code, and not much of it. But when you put 20,000 of these tiny, illiterate insects together randomly, and turn them loose to follow those simple commands, they actually behave as though they were intelligent. As though there were some master plan or brilliant top-down management. The whole is truly far greater than the sum of the parts. (Which is one reason this is also called "emergent intelligence," the idea being that smart behaviors emerge from non-smart creatures under the right circumstances).

You can see how an AI guy might really like this angle of pursuit. Not a billion lines of code, but a few hundred thousand. And guess what, it works. Their study has already yielded practical results, on the market today. By creating software-based pseudo-termites, little programs that only know how to do a few things, and filling up a software program with these agents, they can start to imitate, essentially to create, this kind of intelligence. Southwest Airlines uses what are called ant-based routing programs to help pilots find gates most effectively. The movie industry creates all its huge battle scenes this way now, in software programs, by giving all the animated characters the same set of rules (like, "Try to take the enemy's head off with your sword," and, "If you lose your sword, stab them with your knife."). This technology was pioneered in a program called Massive, used first in the Lord of the Rings trilogy. And that turned out pretty well.
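The core mechanism behind all of these, simple agents plus positive feedback, fits in a few lines of Python. This is a deterministic toy of the classic "double bridge" ant experiment (two paths to the same food, one shorter), not Southwest's actual routing code; all the numbers are invented:

```python
def double_bridge(steps=200, short_len=1.0, long_len=2.0, evaporation=0.05):
    """Mean-field toy of ant-trail formation. At each step, ants split
    between two branches in proportion to pheromone, deposit an amount
    inversely proportional to branch length (shorter trip = more frequent
    reinforcement), and then some pheromone evaporates. No ant 'knows'
    which branch is shorter; the colony's preference simply emerges."""
    ph = {"short": 1.0, "long": 1.0}  # start with no preference at all
    for _ in range(steps):
        total = ph["short"] + ph["long"]
        for branch, length in (("short", short_len), ("long", long_len)):
            share = ph[branch] / total        # fraction of ants choosing it
            ph[branch] += share * (1.0 / length)  # pheromone deposited
            ph[branch] *= (1 - evaporation)       # pheromone evaporates
    return ph

p = double_bridge()
# after enough steps, the short branch holds almost all the pheromone
```

Each agent follows one dumb rule ("go where the pheromone is"), yet the system as a whole reliably finds the shorter path. That's the whole trick.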

But AI scientists want more, of course; they want a human brain. And they have made some cognitive connections that are stunning. Like, for example, they will tell you that the entire human brain works just like an ant-based routing system. The neurons are the little bots, and the synapses are how they communicate with one another, essentially how they are thrown together randomly and run around together. These programmers now figure that if they can get the right "little bot" coding, they can recreate a human brain. The old saw that a software program can only do what you program it to do just doesn't hold anymore. Going about it this way, it might do anything. (Anyone remember HAL? Or I, Robot?)

Okay, so what has all that got to do with learning theory? Well, everything, considering that what the little bots end up with is more than, and different from, what they started with. The entire process of what we call intelligent behavior could well be called learning. There may not be a difference. But aside from everything, the answer is: focus on small, simple codes and instructions. Did you read Good to Great, by Jim Collins? (If not, please do... I'll wait. It's worth it.) Jim's research came to the conclusion that great companies, not just good ones but truly great ones, have a singular focus on a few easily understood, highly motivational principles. He calls this focus the "Hedgehog Concept," for reasons you will understand if you read the book. What he's saying is that if you take a few simple rules, commands if you will, and get a whole lot of semi-intelligent agents (i.e., employees) out there doing everything they can to make those simple things true, something truly outstanding will emerge. Like maybe, the equivalent of a 160-story skyscraper. Or Southwest Airlines. This is empowerment to the Nth degree. So long as the employees are pursuing these simple goals, and they're the right goals, they will create something unstoppable.

Now, from learning to training: Train the goals. We spend a lot of time as educators, trainers, e-learning professionals, focusing on cognitive theory, learning preferences, gap analyses, performance assessment... but what if, without abandoning all that, we just lowered the intensity a bit? And what if we raised the intensity of identifying those four or five key things that, if everyone knew and followed them, and then applied them to their own circumstances, would result in something bigger than the sum of the parts? What if we let go of a little control, focused more on measuring that our learners understood and bought into the main mission, and that they could (and that they wanted to) apply it to whatever they did, and we focused a little less on analytically measuring whether they can lock the widget onto the wonket? They'll figure out widgets and wonkets, if they know the end game, and are committed.

Swarm theory. It's a learning theory waiting to be applied.

Tuesday, July 26, 2011

Feedback about feedback

Most of us who've been doing eLearning for any length of time know all about Kirkpatrick's Four Levels of assessment, and we work hard to get as high up that ladder as possible. What are learners learning, and how is it affecting their world? But how many of us know about the Four Levels of Feedback?

We know how important it is to provide learners with constructive inputs along the way, and to adjust instruction to match needs. That's formative assessment--basic learning strategy, right? If you're like me, though, feedback is mostly a check-mark. Is it built in somewhere, or is it not? If you've got it, that's good--and it will be good for the learner. The research base is well established. And if you're like me, you also know that providing more specific feedback ("The correct answer is X because...") is much better than providing less specific feedback ("Wrong!"). Make the feedback as positive and as ubiquitous as you reasonably can, provide some guidance for improvement, and drive on.

But then recently I stumbled on John Hattie and Helen Timperley, and the four "levels" of feedback they delineate in their article The Power of Feedback (Review of Educational Research, March 2007). Their work sums up a lot of other research, but makes it, I think, much more approachable--providing a practical "handle" for actually using it. It seems to me they've done for feedback what Donald Kirkpatrick did for training assessment. Let's take a look.

The Four Levels of Feedback:

1. Feedback about the task. This is the one that I, at least, tend to think of as "feedback" unless someone focuses me on something more or different. "That is correct, sir," or "You got it right!" or "You missed this, try that." It's corrective. The learner has to achieve something, and you want to let them know whether they achieved it or not, and help them improve. Turns out that while this is helpful, it's not the most helpful feedback you can offer.

Level One feedback can be, and usually is, built into eLearning in some fashion. It's often automated.
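Automated Level One feedback can be as small as a lookup plus a reason. A minimal sketch, with the specific "the correct answer is X because..." form the research favors over a bare "Wrong!"; the item fields and wording here are invented for illustration:

```python
# Hypothetical quiz item: prompt, correct answer, and a stored rationale.
item = {
    "prompt": "Which feedback level addresses the learner's work product?",
    "correct": "task",
    "why": "task-level feedback tells the learner whether the outcome itself is right.",
}

def task_feedback(item, answer):
    """Level One ('task') feedback: corrective, and specific rather than
    a bare right/wrong -- the rationale travels with every response."""
    if answer == item["correct"]:
        return "Correct! " + item["why"]
    return ("Not quite. The correct answer is {0}, because {1} "
            "Review the section and try again.".format(item["correct"], item["why"]))
```

The design point is that the explanation is authored once, per item, and then delivered automatically whether the learner was right or wrong.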

2. Feedback about the process. The "correction" in this case focuses not on the outcome, or the work product, but on the way the outcome is being developed. "You've got an interesting result here, but did you come to it by the means we discussed in class?" Which may be another way of saying, you didn't get the answer right because you didn't go through the appropriate steps. But that subtle difference is important. The writing process, the design process, even the scientific method... you focus the learners on the process they're using, on whether or not they are taking the right steps, and you leave the result (the work product) more in their hands. You can see how it would energize learners, provide motivation to go on improving their work product, when you give them room to fix it themselves by going back to their approach.

Level Two feedback can be done online, but it's tough to automate it, unless the process being taught is also online. Seems to me this will work better with a human agent in charge.

3. Feedback about self-regulation. Here it gets even more interesting. This kind of feedback is about neither the product nor the process, but about how the students view and/or make judgments about their product or process. This "addresses the way students monitor, direct, and regulate actions toward the learning goal." This sort of feedback can motivate students to want your feedback. Very simple examples: "What do you think about your progress so far?" or "Show me how you're coming along," or "Do you feel like you are getting better at this?" followed by, "What makes you think so?"

If you ask a learner if they believe they are getting better at a task, what do you generally get back in response? In my experience, it's something like, "Yes, I think I am improving. What do you think?" And what just happened there? The student is suddenly asking for your feedback. They are asking for input. This puts them fully in charge of their learning and their outcomes. It makes the specific feedback, whatever it is, all that much more powerful when it comes. Effective learners always have sharp self-assessment tools--and they can be sharpened through "Level Three" feedback.

Level Three feedback can easily be provided online. I would not attempt to automate it, but it really boils down to asking learners for some self-reflection. This can be as simple as: "Write a paragraph reflecting on your progress." (Getting learners into the habit of thinking about their learning strategies is another research-supported design element that can improve learning greatly.)

4. Feedback about the self. Here's a somewhat counter-intuitive level. If each one of the levels is more effective, done well, than the previous, and each one gets closer to self-motivation and self-efficacy, closer to the self, you'd think that this fourth level would be the pinnacle of effective feedback. But it's just the opposite. "You're a good learner," or "You're one of my best students," or "You have a knack for taking the wrong tack," are all bad feedback strategies. Why? "Praise addressed to students is unlikely to be effective, because it carries little information." (Hattie and Timperley). And in fact, praise can be demotivating. The learner reaction goes something like this: I thought it was about what I was learning, the goals I was achieving, and then you made it about me. I don't want to be your favorite, that doesn't motivate me. I just want to be good at this. When we make it about them, we also make it about us.

It has also been demonstrated, and logically so, that another way feedback can be demotivating is if the underlying message is that success comes from some natural state, such as being bright or sharp or old or young. The better message is that success comes to those who pursue it (see Carol Boston on The Concept of Formative Assessment, for more on this). "Hard work gets you to your goals," is a message about the process and the product, not about the learner.

Good news: Level Four feedback is very easy to avoid online!

So here's a little meLearning. Not only can lack of feedback hurt my learning efforts, the wrong kind of feedback, though well-intentioned, can hinder them as well. And the right kind can provide a serious uptick in both outcomes and learner motivation. All this resonates with me, and motivates me to refocus on the feedback loops in my eLearning product. Hope it does the same for you.

Friday, June 24, 2011

How much should eLearning cost? Less.

The downward pressure to keep costs low is universal, whether you're talking higher ed or corporate training or eLearning as your product. The reasons are not all economic. There are expectations, I believe, rooted in natural but often unexamined assumptions. If you could tune into the internal monolog of your CEO, CFO, President, Provost, you might hear something like this:

"It costs what? Are you kidding me? I can pay a knowledgeable, competent person a couple of thousand dollars to develop a course and teach it in a classroom, and they're glad to have the money. Why do I need to pay ten times that amount to put the same thing online?"

Maybe that's not an internal monolog in your world; maybe it's painfully external. Regardless, the question is, what should it cost? And if it really costs 10 or 20 times as much to put it online, why does it cost so much?

I've developed a little chart to help explain what's happening here. Not to explain the cost of instructional designers or learning management systems or multimedia or video streaming. It's to help developers think about where they put their money. Because the reality is, when you decide not to invest in development, you are deciding to invest in delivery.

Take a look. What Line 1 says is that if you spend very little on development, the way you do when creating a face-to-face class, you're going to end up spending a lot on deployment, on delivery, unless your target audience is very small. Repeating that class over and over, paying trainers, faculty, maybe travel days, airfares... these costs become enormous over time. And if you just throw the course online on the cheap, using the old "shovelware" approach, you'll pay through the nose for support costs and redevelopment, not to mention attrition.

Line 2 tells you the opposite. If you spend more on the development, creating it the right way online, you can deliver it to a lot more people a lot less expensively. And of course, a lot more consistently.
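The two lines are just two linear cost curves, so the crossover point is easy to compute: a one-time development cost plus a per-learner delivery cost. The dollar figures below are invented for illustration, not taken from the chart:

```python
def total_cost(development, delivery_per_learner, learners):
    """Total cost of a course: one-time development + per-learner delivery."""
    return development + delivery_per_learner * learners

def breakeven(dev_low, per_high, dev_high, per_low):
    """Audience size at which the high-development / cheap-delivery model
    (Line 2) becomes cheaper than the low-development / expensive-delivery
    model (Line 1)."""
    return (dev_high - dev_low) / (per_high - per_low)

# Illustrative numbers only:
classroom = {"development": 2_000, "delivery_per_learner": 150}   # Line 1
online    = {"development": 40_000, "delivery_per_learner": 10}   # Line 2

n = breakeven(classroom["development"], classroom["delivery_per_learner"],
              online["development"], online["delivery_per_learner"])
# n is about 271 learners: below that, the classroom model is cheaper;
# above it, the heavy up-front online investment wins, and keeps winning.
```

Plug in your own numbers and the chart's message falls out: the bigger and longer-lived the audience, the more sense it makes to spend up front.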

So you make the choice. Making it with eyes wide open means that you can and should choose your model, and adjust both the development and delivery costs to suit your needs. The classroom, Face-to-Face model gives you the lowest costs for development. The online, Self-Paced model gives you the lowest costs for delivery. Neither one is a good model for everything, but both are effective for something.

And then of course you can mix and match the models, building self-paced components into your online cohorts, or adding virtual labs or low-touch facilitation to your self-paced products.

My point is not that one particular model is optimal. My point is that if you are going to develop and deliver learning of any sort, you are going to spend your money somewhere. Spend it wisely up front, and you can lower your deployment costs and increase predictability. Don't spend it up front, and you are locked into the high cost of delivery.

Thursday, June 2, 2011

What you believe affects how you learn.

The headline says, "Disbelieving Free Will Makes the Brain Less Free." And the story line is simple... get people thinking about the possibility that their own unique ability to choose is compromised by genetic determinism, and they will do poorly on a "readiness" test. There are several interesting questions to be asked here, and several profound implications for learning.

1. What you believe affects how you perform. Want to enhance performance? Want to change behaviors? Start with your learners' underlying beliefs. And not just any beliefs, but their beliefs about themselves--particularly the "I can't do that" sort of beliefs. Don't waste your breath teaching them what to do or how to do it if you haven't focused on who they are, or who they will be, once they have mastered the knowledge and skills you're teaching. They have to see themselves as someone who can and will and wants to go where you're leading.

That may sound overly philosophical, or even arrogant. After all, you're not in the belief business. But think about it. Coming to a new belief about yourself is not necessarily a big deal or an enormously difficult process. Take a close look at something that may seem impossible right now (getting that next degree or learning to fly a fighter jet or defeating the dragon-monster on level 6), and then focus on whether or not you can see yourself as a PhD, or a fighter pilot, or the master of that video game. If you can catch a new vision of yourself, you're halfway there. You're motivated to do what those sorts of people do. Like the Marine Corps says, maybe you really can be one of them, but you first must identify with the outcomes. That's all a change in beliefs means. Who you are always drives what you do.

2. Some beliefs are clearly more helpful than others. I don't want to be Machiavellian any more than you do, but the fact is that some people's beliefs drive them forward and some people's beliefs dry them up, shrivel them all into themselves. The genetic determinism of Francis Crick, which was the bedtime story inflicted on the participants in the study, is a mind-numbingly thorough proposition that we are only the products of our genes, and there's nothing we can do about it. Not only your free will, but your very consciousness, your sense of self, is "no more than the behavior of a vast assembly of nerve cells and their associated molecules." What you think of as being "me" is nothing but neurons firing. These kinds of beliefs are clearly not helpful if you want to accomplish anything in life--or to teach anyone anything.

Well, you may ask, so what? Truth is truth, whether it is helpful or not. We have a duty to believe what is true. Yes, of course. But without getting too far off the point, let me just ask, doesn't it make sense to seek truth in the direction of life, not stupefaction? In the direction of activity, not helpless inaction? This world, our universe, everything we see and feel and know to be true, is bursting with life and activity. Traveling down some dark and lonely path, away from what is robust and fertile and sunlit and active, away from the endless possibilities of life, just because someone with a high IQ once said that the Truth, capital T, is to be found in that direction--that's foolish at best, tragic at worst. When in doubt, I say, go with what works.

And besides, it's your job to make it work.

3. You don't have to change someone's underlying philosophy to change their beliefs about themselves. I was privileged to lead the charge in building an online nursing master's degree program with a great team of designers, developers, and content experts. Our audience research revealed a startling fact: while most of the candidates wanted to move up, to make more money, to get off the floor where the hours are long and the work is backbreaking, they also felt guilty about it. Their shared value system, what it means to be a nurse, was tied up in being a care-giver, in advocating for patients. They feared that by becoming managers or educators, the two career paths opened to them by our degree, they would lose this.

So we spent the first part of the orientation course showing them that in fact their reach would be extended. Far from abandoning their mission, they were now on a path to expanding it. This simple effort, probably no more than twenty required minutes of a two-plus-year degree program, made all the difference. We addressed their identity. We gave them the opportunity to see themselves with a new, improved identity, having a greater impact by reaching more people than they ever could before. We showed them they could stay true to their original mission and then some. And we played that theme out through the entire program. Measure that nursing program how you will--enrollment, retention rate, student satisfaction--it was an enormous success.

One more example: the military. Why is military training so effective? In many ways it is the gold standard for training; whether it's complex and computer-guided or grunt-simple, they seem to know how to do it all well. My belief? It's because of basic training. It's that six to twelve weeks of rigorous, sometimes nightmarish activity, the purpose of which is to make you a soldier. Or a sailor. Or a marine. What is that but a very careful reformatting of the identity? I'm not saying the actual training isn't great. I'm saying that soldiers obey orders, and when the orders are to learn something, they learn it. This is why applying military training to civilian operations sometimes leads to less-than-stellar results. It's not the training so much as the people being trained. You and I don't get to start with six weeks of boot camp for all our learners (unless, of course, you do). But we can all tie whatever our learners learn into their basic belief systems.

So whether you are training people to put widgets together or educating them to generate ideas to save the planet, focus first on how they think of themselves. Let them see themselves as a widget master, or as an idea generator. Take the time to make sure they have fully identified with their own outcomes. It will pay off enormously. What they believe strongly affects how they learn.

Sunday, May 22, 2011

The next wave of innovation in higher ed.

"An organization simply cannot disrupt itself," asserts Clayton Christensen and the authors of Disrupting Class---->. If you're going to innovate, really innovate in the way that iPods or PC's or online day-trading innovated, you're going to have to do it outside the usual boundaries. This, of course, is what the online for-profit universities did, working outside the traditional ivory infrastructure to reach a huge customer base that was absolutely not going to stop working, move near campus, and go back to school full time for that next degree. The traditionals didn't want those students, and so the for-profits went after them with online classes and office-park classrooms. The classic disruptive innovation cycle began. That cycle continued on its normal trajectory until last summer when the traditionals, with their governmental power base, turned on the for-profits with punitive regulations, most notably the "gainful employment" clause. If you can't beat 'em, regulate 'em out of existence.

That punitive effort, as has been stated here before, seems bound to crater. It's always felt a bit last-ditch to me, and now The Hill reports that it's in serious bipartisan trouble. It seems you can't take away from millions of Americans the best path to a better life that they have ever known, without creating an uproar. So the classic cycle of disruptive innovation rolls on, right? The innovators target a new market that the old guard doesn't want, provide it with a product that changes the landscape, then the old guard eventually embraces it or pays the consequences.

Except that something odd is happening in higher education. The for-profits have stopped innovating.

Reeling from the regulatory crack-down, many have spent their time, money, and energy fighting back with lawsuits and other legal maneuvering. But what is the basis of their legal arguments? If you pull back the covers, you find this: You're treating us unfairly, singling us out, and really, we're not any different from them. And what this means is that behind the scenes, the one-time innovators are scrambling to distance themselves from their innovative roots, straightening out any wrinkles that may make them look, well, unseemly (Innovation? Us? No, no, we're just like them!). Now you have former-innovators working hard to fit in, and to become organizations just like their peers. At least one for-profit that was actively fighting the "gainful employment" rule went full circle and converted to non-profit status earlier this year. If they can't beat you, join 'em.

So if an organization simply cannot disrupt itself, what does that mean for higher ed? It means opportunity for some new innovators.

Did you know that Compaq invented the iPod? Or rather, the first palm-sized digital "jukebox" with enormous storage capacity? I just learned that, thanks to an article I stumbled across in CNET Reviews. Compaq beat Apple by three years and still lost in the marketplace. Compaq was simply not prepared to make its own innovation a centerpiece of its business strategy. Apple, well practiced at making the most of someone else's invention by perfecting both the product and the business model, was fully prepared to launch into a business that was not originally its own.

So who will the new innovators in higher education be? Who are those who are watching all this unfold, and are ready to take what has been done so far to the next level? I had lunch once at a conference with two of the product people who worked on Apple's iTunes/iPod system. "We couldn't believe Sony wasn't already doing this," one of them said. "We were in a hurry," the other chimed in, "because we thought that before we could get our product to market, one of the big entertainment/technology companies would already be there." Apple couldn't have been more right about their product, or more wrong about their competition. Not only was Sony not there, but the media giant and one-time innovator (remember the Walkman?) would line up on the other side, working to protect their portfolio of artists from the revenue squeeze that the mp3 revolution brought about.

The new innovators in higher education are not in the spotlight right now, but they are not on the sidelines, either. They are working hard as all this plays out. They have accreditation. They understand in ways that most of the original, big-name for-profits never did that quality is key--educational quality, product quality, business quality, and academic rigor. Like Apple, they understand that perfecting the business plan and the product means huge opportunity. Blackboard and streaming lectures are 1990s technologies. The future is differentiated learning, using the power of technology to mass customize education and prove outcomes beyond a shadow of a doubt.

They're waiting. But knowing how innovators tend to think, you can bet they won't wait for long.

Monday, May 9, 2011

A better business model than the shell game

It doesn't have to be a full-on, for-profit business model, it just has to be better than the current shell game. Inside Higher Ed reports that a number of state governments took stimulus money in 2009 and poured it into higher education, while actually cutting higher ed budgets. And those cuts are going to be exposed very soon.

The stimulus package "opened up the ability for states to reallocate dollars away from education and mask it with federal money." Why did they do this? Because overall budgets were crunched and the feds were offering windfalls for education. It seemed good to state lawmakers to take their own local tax revenues and redirect them into budget areas that weren't being so lavishly supported by Washington. A shell game? Robbing Peter to pay Paul? Call it what you will, but don't call it a sound business plan.

I suppose it's barking up the wrong tree, or perhaps the wrong tower, to make the modest suggestion that universities (and state legislatures) might examine their education revenues and plan ways to increase them without relying so heavily on taxpayers' dwindling dollars. I understand the monumental nature of this suggestion, but I also know that there are places to start. Like, perhaps going to online options that can actually compete with the online, for-profit brands.

After all, the for-profit schools have managed to do well enough with a model that creates a positive cushion between expenses and income, even while growing. Yes, growing! The state schools (surprise) lose money on every student, so when their budgets are cut, they have to consider reducing enrollments. Unlike anything at all in the private sector, the solution is not more paying customers--it's fewer. Think about that a moment, and take it to its logical extreme: State schools would be at their financial best if they had no students at all. Amazing, but unfortunately, quite true.

Any business model improvement would be a positive one for the university system, and for our economy, and for our wallets as taxpayers. As it is, some mad scrambles are ahead for many university systems. Maybe someone, some brave soul somewhere within academia, will think about the free market as a possible solution. And if not, maybe a legislator or two will speak up?

Wednesday, May 4, 2011

Final Part, The best instructional design models for today...

Now that we have clarified the question (part 1), conducted a Discovery (part 2), and framed up our media, limitations, and objectives (part 3), we can answer the original question: How do you go about choosing the best Learning, Instructional, Delivery, and Assessment models--given all the technologies available for eLearning today?

Define your models. These models all overlap one another and inform one another, and none of them stand alone. They should be considered together, as significant parts of a single whole.

The Learning Model. This is the path your learners will take as they navigate the experience you have defined for them. Let's start with Gagne's Nine Events:

1. Gain attention
2. Inform the learner of objectives
3. Stimulate the recall of prior learning
4. Present stimulus material (content)
5. Provide learner guidance
6. Elicit performance
7. Provide feedback
8. Assess performance
9. Enhance retention and transfer

Other research-based events:

--State the purpose of the learning
--Model the behavior/skill/application
--Provide guided practice, without assessment
--Provide student-centered closure
--Provide opportunity for self-reflection

Each of the above events is backed with solid research findings that show an increase in learning and retention if they are included. In order to generate a learning product that has the best chance of producing the desired learning outcomes, pick your events and put them in a standard order. That's your Learning Model. Each lesson, each unique module, should follow your chosen pattern, your Learning Model... until you decide that a lesson or module should follow a different one. Then define that one.
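For the programmatically inclined, a Learning Model really is just an ordered sequence of events that every module should follow. Here's a minimal sketch in Python (the function and the subsequence check are my own illustration, not part of any real authoring tool):

```python
# A Learning Model sketched as an ordered sequence of Gagne-style events.
# Names are illustrative; pick your own events and your own order.

learning_model = [
    "Gain attention",
    "Inform the learner of objectives",
    "Present stimulus material",
    "Elicit performance",
    "Provide feedback",
    "Assess performance",
]

def follows_model(module_events, model=learning_model):
    """True if the model's events appear in the module, in the model's order."""
    it = iter(module_events)
    return all(event in it for event in model)  # consumes the iterator in order

lesson = [
    "Gain attention",
    "Inform the learner of objectives",
    "Present stimulus material",
    "Elicit performance",
    "Provide feedback",
    "Assess performance",
]
print(follows_model(lesson))  # True
```

The point of the check is simply that each lesson follows the chosen pattern; a lesson that scrambles or skips the events fails it.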

And let me just say that, as someone with a deep media background who understands impact, I can absolutely vouch for the power of these steps. If they didn't work, I never would have started using them--I'd have just added more video, more animation, more of that powerful media stuff. But in fact, getting this right is the single most effective thing you can do to increase learning. No exceptions.

So. By carefully choosing the order and matching your media with these events, it is possible to include almost the entire array of instructional options ever conceived, from the case method to discovery to self-paced and differentiated learning. You can also include group work, projects, discussions, whatever learning methods are appropriate. They all fit, because they can be used to achieve learning events. There is art to this, absolutely. Talent and experience are very helpful companions here, but the threshold is low. Creating a Learning Model is neither rocket science nor the Sistine Chapel ceiling.

The Assessment Model. Notice Gagne's event numbers 8 and 9... they are about assessment. This is why I consider the Assessment Model to be part of the Learning Model. How will you assess that the objectives have been met? Decide it early, and build it in. It is your proof of success. Most technology platforms have test engines, which suffice for most uses. High stakes assessments are more science than art, however, and creating reliable ones will require some psychometric expertise. Factor that in if you need it.

The Instructional Model. Did you notice that while Gagne's events define learner activities, they are actually written from the instructor's point of view? He wrote them in 1965, a classroom-only era... and in those classrooms the space between what the students were asked to do and what the instructor was doing was very, very narrow; pretty much two sides of the same coin. Online, though, the space between can be as wide as you want it. Self-paced instruction is one end of the spectrum, where there are no instructors at all. Live videoconferences or webinars are the other extreme, pretty much a traditional live grouping, but instead of being enclosed by walls all are connected by the web. Your Instructional Model defines what you want the presenter/instructor to be doing, event by Gagne event.

The Delivery Model. And here is where all the above comes together in your technology. Here is where you find ways to use all the exciting new technologies at your disposal. Or not. What you want to do now is to balance all that you want to accomplish with your audience, all your media choices and your constraints, and (this is a really, really important moment) build your Learning Model into your technology. And particularly, into your user interface.

The Learning Model cannot fight the technical infrastructure. If you want to be successful, your Learning Model needs to work within the actual user interface, it must become one with the learner experience, and be facilitated by all the screens and prompts and media that your learners are facing. Your goal is to build the learning model into the technical interface. At its best, the technology enhances your Learning Model at every turn, and wonderfully supports your Instructional Model (what the instructors are doing) as well.

Wait, I already made my media choices, you may be thinking. Doesn't that define my user interface? No, the two are not identical; your media will always be a subset of your user interface. While participants are watching a video, for example, that video and its controls are the user's interface. But how did they navigate to the video? Where do they go after watching? How do they know where they're going next and do they know why? All of that, all the steps that move learners through the entire experience, add up to your User Interface.

Everything about the student's pathway through the material, then, even their navigation panel or menu, should reflect your order of instructional events. And that order of events is, of course, your Learning Model. My teams and I have gone so far as to name the buttons on an interface after Gagne. We have had students, for example, click on "Recall Prior Learning" at the appropriate point in the lesson.
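To make that concrete, here's a toy sketch of a navigation menu whose buttons mirror the Learning Model, event by event (the button labels and target pages are invented for illustration, not from any real course):

```python
# A lesson nav menu that reflects the order of instructional events.
# Labels and targets are hypothetical examples.

nav_buttons = [
    ("Gain Attention",        "intro_video.html"),
    ("Objectives",            "objectives.html"),
    ("Recall Prior Learning", "review_quiz.html"),
    ("Lesson Content",        "content.html"),
    ("Practice",              "practice.html"),
    ("Feedback",              "feedback.html"),
    ("Assessment",            "assessment.html"),
]

for label, target in nav_buttons:
    print(f"[{label}] -> {target}")
```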

So about all those new and different technologies... how do you factor them in? You've already chosen your primary media, so you should at this point have a standard media approach enfolded into your standard Learning Model. If you've decided that podcasts are key to delivering your message, for example, then you are probably going to use them for many of the first four or five events on Gagne's list. But this should not limit you. At each point in your Learning Model, at each event, you are plugging new content into what is in effect an outline, right? So keep an open mind about what would be effective. Maybe a video will help now and again. Maybe construct a little interactive element. It doesn't have to be your primary media every time; if your technology can deliver it, and you can build, borrow, or license it, factor it in where it will help. With a solid Learning Model, everything is possible if your technology will support it.

What if you can't make your delivery platform perfectly reflect the Learning Model you want? It means that you need to start adjusting each of your models, based on all you know and all you want to achieve, so that they all work together as a whole. Each model informs and often limits the other models, and you will be making trade-offs. But your mission is to make all of the models fit together, beginning, middle, and end, providing your learners with a unified experience, driven by the most powerful media you can manage, to the end that your message has a measurable, effective impact.

So that's my answer. It wasn't as thorough as it could be, nor as concise as I would have liked. But if you want the most bang for the buck, then this is how I recommend going about it. At least, it's how I do it, and it's been very successful in many arenas, with many technologies, over a worthy period of time.

So to my original inquisitor, and any and all with questions in this area, I hope that helps! (And don't be such a stranger... you have my contact info.)

Sunday, May 1, 2011

Part Three, The best instructional design models for today...

The question on the table is (still) what are the best Learning Models, Instructional Models, Delivery Models, and Assessment Models for any given training or educational need--given all the technologies now available? Or more succinctly, how do we determine the best way to design eLearning and hybrid learning experiences?

Let's break down the decision-making process. We are assuming that you have by now sorted out exactly what your learning product needs to accomplish (more on that in my previous post). So here are your next steps, in order:

1. Define your media. Whoa. What? Start with media? Well, it is a bit counter-intuitive to start by talking about the media you'll be using before you settle on those all-important learning and instructional models, but that's what I do. And here's why. Making the media decision early requires you to know your audience and your message intimately, and then allows you to tuck those media decisions under your arm and carry them with you through all those important decisions you'll be making later.

Remember Marshall McLuhan and his "the medium is the message" message? What he said is more true today than it was 40+ years ago. You have at your fingertips the ability to supercharge your message to this audience by matching it with the appropriate media. Match it well, and you are riding the media wave with all the power that implies. Mismatch it, and your message can get lost, dragged out to sea in the unpredictable undertow where poor training and education are cast away. So how do you choose your media?

Know your audience. Defining media starts here. When I say know your audience, I don't just mean demographics. Learning is about change, and change is about motivation. And motivation is about dreams and goals. Who they think they are and what they want to be, how they think of themselves, how they want to think of themselves--this is your sandbox. Your message, your learning product, your "Y factor" (as defined during your Discovery) is going to address these core identity issues, or it's going to flail about, sputtering for help.

Know your message. You know what you want to accomplish and you know your audience. Now take a close look at your message. Certain messages are better delivered through certain kinds of media. Put another way, the content type drives your media choice.

Here is my list of content types*:

1. Factual Knowledge (facts, data, vocabulary, formulas, schema)
2. Conceptual Knowledge (principles, ideas, theories, models)
3. Procedural Knowledge (skills, techniques, methods, processes)
4. Contextual Knowledge (strategies, tactics, applications, problem-solving)
5. Cultural Knowledge (values, mores, systems, empathy)
6. Motivational Engagement (goals, drivers, desires, expectations)
7. Identity Engagement (self-worth, self-perception, purpose)

Take a look at your "X factor," the big results you expect from your learning intervention. Look at the list above. What kind of knowledge are you trafficking in, if you are going to achieve those ends? There will likely be more than one kind.

Choose your primary media. Now that you know both your audience and your message, choose your media. What are your options? If you're reading this on a fairly recent desktop or laptop with a web connection, you could probably create any one of the following using this sentence as content--the sentence you're reading right now--and you could do it before you stand up: Slides, narrated slides, video lecture, audio lecture, web page (in a social network, at least), blog, tweet, journal entry or report, email, screen capture, photo with captions, photo with narration, phone interview, video interview, spreadsheet, or graph. Take a look at your "Applications" directory and you will likely come up with a similar list--probably adding several more. I'm not saying you have to do it yourself. Professionals can help you pack a punch. I'm just saying you have lots of options, and you should consider them.

Does it matter which media you choose? Look at the various media available to you right now, this instant, and you can see how they would not all be equally helpful for each of the above-listed content types. You can see how a bar graph might not stimulate motivation, or a phone interview might not be the best way to teach visual procedures. Right? And that's the point of deciding on your primary media early. If you want to focus your message, the art of matching media with it is crucial. These decisions can evolve, but right now you want to make an early decision about your primary media types. Over the years I have developed a methodology for just this purpose, for this critical moment--it's basically a matrix that matches content types to media approaches, and it's ever-changing.
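My actual matrix isn't something I can reproduce here, but for illustration, a stripped-down version of the idea might look like this in code (the pairings below are plausible examples I've invented for this sketch, not my real recommendations):

```python
# A toy content-type-to-media matrix. Pairings are illustrative only.

media_matrix = {
    "Factual Knowledge":       ["slides", "flashcards", "reference page"],
    "Conceptual Knowledge":    ["narrated slides", "diagram", "video lecture"],
    "Procedural Knowledge":    ["screen capture", "demo video", "simulation"],
    "Contextual Knowledge":    ["case video", "branching scenario"],
    "Cultural Knowledge":      ["interview video", "story podcast"],
    "Motivational Engagement": ["testimonial video", "live webinar"],
    "Identity Engagement":     ["reflection journal", "coach interview"],
}

def suggest_media(content_types):
    """Candidate media for the content types a message carries, de-duplicated."""
    suggestions = []
    for ct in content_types:
        for medium in media_matrix.get(ct, []):
            if medium not in suggestions:
                suggestions.append(medium)
    return suggestions

print(suggest_media(["Procedural Knowledge", "Motivational Engagement"]))
# ['screen capture', 'demo video', 'simulation', 'testimonial video', 'live webinar']
```

The real work, of course, is in deciding which pairings belong in the matrix for your audience and message; the lookup itself is trivial.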

As you begin to look at your constraints and your learning objectives, you can now keep your compass pointing toward that powerful, magnetic pole that is your chosen media.

2. Define your constraints. This is the opposite pole from selecting your media, the negative to all that positive energy. But it's highly necessary, especially now, to avoid going over budget or just overboard.

Financial Constraints. Fortunately, you ended your Discovery Process by defining that hole in the bucket through which your organization's investment is disappearing--the hole your solution is going to put a cork in. How much is that cork worth? You don't have to be an accountant; a little back-of-the-napkin figuring can project out those excess costs and lost revenues over the next five years. Is it worth investing a fifth of that amount now in order to stop the bleeding? A tenth that much? A twentieth? Likely that's all you'll need, probably even less. Get agreement on the approximate size of the cork, and you've got your first limitation defined.
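If it helps, the napkin math can be spelled out explicitly. Here's a sketch, with dollar figures invented purely for illustration:

```python
# Back-of-the-napkin "size of the cork" arithmetic.
# All figures are made up for illustration.

annual_excess_cost = 250_000   # what the leak costs per year (assumed)
years = 5

five_year_leak = annual_excess_cost * years   # projected cost of doing nothing
budget_ceiling = five_year_leak / 10          # invest a tenth of that to stop it

print(f"Projected 5-year leak: ${five_year_leak:,.0f}")    # $1,250,000
print(f"Proposed budget ceiling: ${budget_ceiling:,.0f}")  # $125,000
```

Whether the right fraction is a fifth, a tenth, or a twentieth is a conversation to have with your stakeholders; the point is to get agreement on the approximate number.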

Technical Constraints. Technology is a second limitation, one that is always related to the monetary one. But often, there are non-financial reasons that certain technologies must be taken off the table. There are two sides to the technology constraint: users, and providers. What do the learners have already? The answer is usually easy; companies know what is available in the workplace, and universities know what is already required of their students.

The provider side of the technical constraint is a bit trickier. Often limits are strictly defined by an IT or an IS department, in terms of hardware and software. I "go to the cloud" whenever possible, to avoid the headache of internal installation and maintenance of software, and to sidestep political entanglements. But like pollen in the springtime, these are not always possible to avoid.

Expertise Constraints. You will likely have limits put onto your effort by both the development subject matter experts and the instructional subject matter experts--those who build it, and those who teach it. Who are your SMEs, and how do you access them? Define this early. You may face some financial limitations here, but the biggest bottleneck will likely be their time. Speaking of which...

Time Constraints. It's not just your subject matter experts; the time constraints on your students must be considered as well. What do they have time to do? Can they spend an hour a day? Four hours a day? Eight hours a week? Learner schedules will have a significant impact on your models.

3. Create your primary learning objectives. Now instructional design begins. Many people start here, and of course starting with your learning objectives sounds logical and right. But as mentioned, I have found this step much more productive when you know your two opposite poles: your baseline media and your primary constraints.

Carefully define the highest-level objectives, and write them in terms the learners will understand and embrace. (Some call these "terminal" objectives, but to me that adjective calls up images of death and layovers, both of which I prefer to avoid.) These are your learners' signposts, and yours. These are the highly visible standards that will be raised at the beginning and measured at the end, announcing to one and all the common goals of the learning. You will have lower-level, supporting objectives that will be defined later, when the content is being structured and organized. These sub-objectives are often called "enabling" objectives. (But I don't use that term either. I'd rather be supportive than be an enabler!)

Write your objectives down in plain language, with your students as subject: "By the end of this learning experience, participants will be able to... [remember, explain, apply, analyze, evaluate, create...]." Whenever I develop objectives, I find it amazingly helpful to overlay Bloom's Taxonomy (revised version--click the link for Iowa State's very cool, interactive visualization), to help determine what the learners actually need to do with the content.
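If you want to mechanize the template, here's a toy sketch that pairs a Bloom level with a topic (the verb list is abbreviated and the function is my own invention for illustration, not a complete taxonomy tool):

```python
# Drafting plain-language objectives from revised-Bloom's levels.
# One example verb per level; real verb lists are much longer.

BLOOM_VERBS = {
    "remember": "list",
    "understand": "explain",
    "apply": "demonstrate",
    "analyze": "compare",
    "evaluate": "justify",
    "create": "design",
}

def draft_objective(level, topic):
    """Return a learner-as-subject objective for the given Bloom level."""
    verb = BLOOM_VERBS[level]
    return (f"By the end of this learning experience, participants "
            f"will be able to {verb} {topic}.")

print(draft_objective("apply", "the five-step discovery process"))
```

A draft like this is only a starting point; the hard part remains choosing the right level for what learners actually need to do with the content.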

Ah, this has gone on too long again. We will get to the Learning, Instructional, Delivery, and Assessment models next time. I promise!

* Numbers 1-3 on this list come from the cognitive knowledge types in the revised Bloom's Taxonomy by Anderson, Krathwohl, et al. Numbers 4-7 are my own subdivision of their single meta-cognitive category into common types of self-awareness that are well known to me, and to other learning professionals with whom I have worked. I have also baked into this list all three domains: psychomotor, affective, and cognitive (but that's grist for another post!).

Tuesday, April 26, 2011

Part Two, The best instructional design models for today...

Okay, continuing my effort to answer the question from my previous post, we start with this revision: Given the wide variety of technologies now available, how would you go about recommending the appropriate Learning Model, Instructional Model, Delivery Model, and Assessment Model for any given training or educational need?

First, you need to know exactly what you're trying to accomplish, not just for the learner, but for the organization as well. So I'm going back to the ADDIE approach to instructional design, starting with A: Assess the need. To get to the right model, you need a Discovery. Often, time is of the essence, and a formal research project is out of the question. So my own customized take on several standard methodologies, designed for speed and accuracy, goes something like this:

Discovery Process
1. Get the Story. This is not science; it's Journalism 101. What is the business problem, what is causing it, and what do people believe will solve it? Who, what, where, when, how, and most importantly, why? At this stage, you're strictly about interviewing those who should know, gathering what is likely 90% opinion. But a few really good questions up front can move you closer to the right solution with more efficiency than any single thing you can do at any later time.

Business example: "The employees are not getting the job done," becomes, with good questioning, "The salespeople don't know how to sell our latest product," which becomes, "Our front-line sales are off 20% over the last three quarters and we think it's because our new salespeople do not understand the product."

University example: "We need our online MBA revised in order to compete with Big For-Profit University," becomes, with further questions, "Our courses are highly idiosyncratic and uneven, one to the next," which becomes, "Our faculty never quite pulled together as we would have hoped, so we don't have our best product out there."

2. Get the Data. Still not science; still Journalism 101. Check the facts. Whatever actual data you can find, whether it's sales numbers, marketing responses, employee retention, whatever sheds light on the story, use it. You also want to talk with key stakeholders for some of that softer but powerful input--dispatches from the front lines, both personnel and customers. Many times their input has been colored and filtered by the time it reaches the stakeholders higher up.

Business Example: "We think our new salespeople don't get it," becomes, once you see the data, "Salesperson attrition has climbed by 15% and managers are hiring anyone they can find," which in turn becomes, after interviews, "We are attracting neither customers nor employees since we launched our new product."

University Example: "Our faculty are not pulling together," becomes, "Student complaints and course drops are high and rising," which becomes, "We have never had a detailed budget or a solid design plan for the MBA," which becomes, "Every faculty member wants it their own way, while the students want consistency."

3. Write the Story. Maybe I should have just been a journalist. But now, you document what you know in a narrative. You're still the reporter, so make it both factual and pointed--and short. If you've been consciously and obviously focused on reality and not politics, you will have become the expert--an objective "outsider" with a clear grasp of what's really going on.

4. Publish the Story. Maybe I've taken the journalism metaphor too far. True, you are not really publishing it, but you are getting it out to the stakeholders so that they can see and verify your sources, facts, and conclusions.

5. Create the Equation. Here's where it pays off. Assuming you get the nod of approval for the story, now you get to the science. Or at least, the math. My equations look something like this:

If A + B = C, and we need C + X, then A + B + Y = C + X.

This is not really algebra, but a mathematical metaphor. A and B are data-backed realities, the facts of the story that are leading to the current poor result, which is C. So A + B = C defines the current problem. In the business case, A is a change to a downmarket product, B is a mental/emotional disconnect between the brand promise and the new product, and C is the loss of loyalty among employees that is hurting sales. In the university case, A is the inconsistency of the program course to course, B is the desire of students to have the consistency they signed up for, and C is the dropout rate.

Now for the solution: C + X is the current state C plus whatever is necessary for success, the X factor. In the business case, X is the return on investment now improved by alignment of brand and employees. In the university case, X is the improved retention rate and associated dollars.

So what is Y? (Drum roll) Y is what you're adding to the equation through learning. Y is your business, your baby, the learning answer to the organizational problem (Trumpet fanfare). In the business case, Y is what you do to change the perception of the employees about the new product. In the university case, Y is the redeveloped MBA, with a specific budget, and with general consensus. So whatever you're currently doing, add Y into the mix on the front end, and you'll get the solution you need:

A + B + Y = C + X. Current inputs A and B plus new learning initiative Y equals current state C with improvement X.

This equation can be written in any way necessary, in order to get to the solution. Though this is a metaphor, I should point out that this equation can carry the punch of real math if you can add the actual dollars to it--even in broad brush strokes or ballparks. Show what the current state is costing, and what the end state will save/generate.
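Here's what the broad-brush dollar version of that punch might look like, with figures invented purely for illustration:

```python
# Ballpark dollars behind A + B + Y = C + X. All figures are assumed.

current_annual_loss = 400_000   # C: what the problem costs now (assumed)
projected_recovery = 300_000    # X: annual improvement if Y works (assumed)
cost_of_y = 150_000             # Y: one-time learning investment (assumed)

first_year_net = projected_recovery - cost_of_y
payback_months = cost_of_y / (projected_recovery / 12)

print(f"First-year net gain: ${first_year_net:,.0f}")   # $150,000
print(f"Payback period: {payback_months:.0f} months")   # 6 months
```

Even at this broad-brush level, numbers like these turn the metaphor into something stakeholders can sign off on.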

6. Confirm the Equation. This step is the reason for the previous step. By putting the problem into a formula, you can focus on it objectively and get stakeholders to agree not only on the problem, but on the solution. And on the ROI of the solution. When you get confirmation here, everyone knows the kind of dollars that are at stake. You need this on paper, in black and white (or black and red, to be more symbolically precise), because the solution you build is going to cost something.

7. Build and Implement Y.

Hey, we're finally ready to start looking at the best Learning Models! So what did you gain from your Discovery that allows you to make that all-important Learning Model decision?

I'll get to that (I hope) next time.

The best instructional design models for today...

"What are the best instructional design models for eLearning today?" I was asked this recently, and it gave me pause. It's a hard question to answer, because the question is somewhat... tangled.

Considering that the art and science of instructional design has been around since World War II and the "Training Within Industry" initiative, and considering that lesson design has been shown to directly impact learning since at least Robert Gagne in the 1960s, I continue to be impressed by how few of the basics are broadly understood.

So let me unmuddle the question just a bit, then provide an answer. First, the term "instructional design model" refers to the process used to design instruction. The big kahuna of ID models is ADDIE, of course--Assess, Design, Develop, Implement, and Evaluate. When you want to build instruction, if you do each of these things in order, and do them carefully and well, you will likely end up with a serviceable result. But ADDIE flies at 30,000 feet and leaves much detail obscured, so there is plenty of room for pilot error. So I like to blend ADDIE with Rapid Prototyping, in which you quickly get learning chunks out to a slice of the target audience and let them provide feedback. It can only help, and sometimes it can save the day.

But my inquisitor in this case was actually unconcerned about instructional design models. It turned out the question was really about "learning models." A Learning Model is focused solely on what the learner does, in what order, when, and how. It has implications for everything else--instructor, technology, assignments, content, assessments--but it's not primarily focused on them. It defines a standard process for the learner. I put the Learning Model at the center of all instructional design because its heart and soul is the one thing that really matters... the learner.

Every learning event has a Learning Model, even if no one has defined it, and even if it's not very good ("Death By PowerPoint" is a common one, though not a personal favorite). But even the good ones can be all over the map, from variations on the case method to apprenticeship to more standard classroom lesson approaches. A good starting place for understanding what a Learning Model is would be Robert Gagne and his "Nine Events," which started the whole focus on structuring learning in order to improve outcomes, and which still holds a place of high esteem, because what he developed still works.

So, the best Learning Models for today... what are they? That answer has to be filtered through both the Instructional Model and the Delivery Model. And what are they? Well, to clarify terms, an Instructional Model defines the standard process and activities of the instructor, including such basics as whether or not the instructor is human (self-paced online learning, for example, does not require the "instructor" to have a pulse or respiration). It is to instructors what a Learning Model is to learners. Every public speaker has heard of at least this one: "Tell them what you're going to tell them, tell them, then tell them what you told them."

The Delivery Model then defines the platform, whether it is face-to-face (F2F), technology-enhanced, online, or any combination of the above. It also defines which specific technologies, by brand name, are being used to achieve the desired results. Hybrid models are popular, but not always possible. In fact, most technology platforms severely limit the Learning and Instructional Models. The unhappy result is that often the Delivery Model, which should be the last one chosen after the other models are defined, wags the dog.

And speaking of the dog, the answer to my questioner's question should also be shaped by the Assessment Model, the manner and mode of determining how effective the given learning opportunity has been. This could arguably be the most important model of them all, but in order to avoid that discussion I like to include it as a subset of the Learning Model. For this critical component, like many other practitioners I go straight to Donald Kirkpatrick's classic "Four Levels" of evaluation. Unlike the rest of the Learning Model, the Assessment Model can be, and too often is, completely absent. Those who pay little attention to Learning Models often ignore assessment as well. (So if "Death By PowerPoint" is the Learning Model, I say don't bother with Assessment. The dead are notoriously poor test takers.)

So now, if I were to restate the question in its untangled form, I would put it like this: Given the wide variety of technologies now available, how would you go about recommending the appropriate learning, instructional, delivery, and assessment models for any given training or educational need?

Ah! That's a question I like!

But now I'll need another post to answer it.

Friday, March 25, 2011

What for-profit online higher ed fears most.

Do you know what the online, for-profit higher education folks fear most? Not regulatory changes. New regs, even malicious regs, cannot drive them out of business. Shareholders generally do not accept excuses and cries of "unfair!" when their investment is at stake, so business pressures will 1) drive quality improvement to meet any new standards, and/or 2) drive them out of money-losing markets and programs, into others that are more lucrative. Eventually, Gainful Employment will blow over, the political winds will shift, and there may even be a backlash. This won't keep the online for-profits down long.

So what do they really fear? Not competition from one another. They're used to it. They are compared constantly to one another by Wall Street analysts. They don't fear the irregularities of the stock market, although they respect it highly. They know the game they are playing and they play it well, generally. They don't fear changes in technology. Higher ed is not expected to be on the bleeding edge, and you can see these technical waves coming years in advance and plan to invest in them. They don't fear globalization in any form; they welcome it. They need to learn a bit more about international standards and customs, but otherwise they are poised to reproduce the same profits they've had in the US in other parts of the world.

They really fear only one thing, the one thing that could bring their world to a crashing halt for good and all. They fear that the traditional schools, the state schools, the non-profits, will embrace sound business principles regarding product and service. Not marketing--the non-profits have name brands that sell themselves. You don't have to create brand awareness for the University of Texas or Ohio State like you do for Walden or DeVry. No, all the traditionals need to do is to embrace quality product and world-class service. All they need to do is have a very good online experience for their students, from enrollment through completion. They need courses that make sense with one another, that work on their own and in context, that are intriguing, that are all equally demanding, equally excellent. If and when that happens, it will spell the end of the for-profits' dominance in online higher ed. They may in fact shrivel up and melt away.

"Ha!" you say, "If that's what they fear, they should have no fear at all!" Well, that might have been an easy enough answer five years ago, but times change. I suspect that the current brouhaha over regulatory squeezes is actually the last gasp of an old order trying to use its sheer power to crush a new order, one that has in fact grown too big to crush. That type of last-ditch, desperate action tends to backfire (see: Libya, Egypt, et al). There are voices, stronger voices every day, within and among faculty at the non-profits, telling whoever will listen that their online product is not up to the standards of a world-class organization, and it should be.

And those voices will continue to be heard, because they are the younger voices, those who grew up online, or at least raised their children online, and who know that their own online product pretty much (excuse the non-academic vocabulary) sucks. You can't have every faculty member putting his or her own course online in any way he or she wants and expect the result to be anything but a mess. "Academic freedom!" faculty cry, as they poison their own wells. At some point, faculty know they are starting to sound like the teenager who refuses to mow the lawn on the basis of his ecological principles, while filling a Sasquatch-sized carbon footprint with 20-ounce empties. Demand total control of your online course if you must, but just know that doing so leaves you no grounds to complain about an amateur-looking website, or a confusing textbook, or even a cheaply made suit, ever again.

Because here's the thing. The best of the for-profits have already figured it out. The Waldens and the Capellas know how to preserve academic freedom while building an excellent product and offering a great service. They know it can be done; they're doing it. They've been doing it for a long while now.

It just takes a little bit of concern for the student's experience. It takes a little bit of unselfishness, just enough to admit that maybe I, as an individual faculty member, don't know absolutely everything about the content I'm teaching, the technology that elearning requires, the media that can make it more powerful, the student experience that can be exciting and immersive, or the wide array of online learning models now available. It takes just enough unselfishness to say, hey, my students could actually benefit from a course built with the engagement of other qualified faculty, the inclusion of world-class experts in my field who don't happen to work at my university, the work of media developers who know how to make a video or an animation that truly rocks, the contribution of instructional designers who have creative ideas for assignments and group engagements, and an investment of tens of thousands of dollars (or more).

What the for-profits know is that when presented with this sort of logic, from people who know how it's done and have done it, faculty respond positively. They tend to have an "aha!" moment when they realize that they can in fact lead the academic charge and protect academic integrity without the side effect being that students suffer only and always with whatever limited product an individual faculty member can dream up late at night on a laptop using Blackboard. Even with a support staff, that model cannot compete with a model that puts the right team of professionals on an equal footing with the right faculty. (Not an equal academic footing--but an equal product development footing. The difference is critical.)

Why don't universities invest in online programs the way they need to, in order to compete? Possible reasons: 1) They don't understand how to build a business plan that will in fact return their dollars two, three, or ten times over. 2) They don't have dollars allocated for that kind of investment anyway. 3) Faculty who stand for academic freedom will never stand for the collaboration required to build professional-quality product. 4) They already tried something like that here, and it failed. 5) They represent an academic institution with scholarly goals, and do not like to lower themselves to "productizing" intellectual pursuits.

The online for-profits look at that list of reasons and smile nervously. They smile because it represents the levee that is holding back the flood that would otherwise engulf them. They smile nervously because they know there is nothing on that list that is of any substance, at all. For business people, those are tissue-paper-thin excuses, the last refuge of the lazy and the incompetent... and they know that universities are full of people who are neither lazy nor incompetent. They also know there are professionals out there who know how to do all those things, and have done them all (and sometimes blog about them), and any university that wants to can make it work.

The online for-profits know that the levee holding back the flood is in fact an earthen dam made of old tires, marsh grass, and mud. They know the rain is still falling. The thunder is growing louder. The lightning is flashing out its warning. And the water is still rising.

The dam still holds. But for how long?

Monday, February 28, 2011

Deeply Impersonal: What "The Social Network" says about social networks

The Oscar dust has settled, and "The Social Network" cannot lay claim to being the Best Picture. So before it's covered completely by that settling dust, here's my review of it. Not of the movie, which is clearly pretty good by anyone's standards, but of what the movie says about the role of the online social network within the world of real social networks, and what that in turn suggests about online learning. (Just go with me here.)

Five observations, in random order:

1. You don't need any social skills to have a hundred million friends online. The social Neanderthal named Mark Zuckerberg in this movie makes it crystal clear that Facebook does not actually create, or connect, "friends." It simply creates online connections between people. This is an important distinction when trying to determine how to use social media in learning: Dating sites create dates; social networks create updates; learning platforms create learning interactions. Once again we learn that you get out of it what you program into it. So if you've been worried that your elearning engagements don't have a News Feed or a Friend Request or a Status Update feature, take a deep breath. It's going to be okay.

2. Maybe the natives were right, and photographs really do steal your soul. There's a line that goes by quickly in the mid-to-late part of the movie, in reference to the photo-sharing app within Facebook: "Now it won't be enough [I'm paraphrasing] to go to a party... you'll have to go with your camera so that you can relive the party online with your friends." This is of course what has come to pass. I have been to a party where young people were dead on their feet until someone raised a cell phone camera, at which time everyone in range became happy and cool--until the picture was snapped. Then they returned to their waking slumbers. We've all been in classrooms just like that party. Let's make sure our elearning classrooms, content, and activities connect students to the real world in new and different ways, rather than disconnecting them from it further.

3. Social networks operate on a deeply impersonal level. What caused "Facemash" to bring down Harvard's servers? Guys comparing two girls they knew and deciding which one was hotter. Something in that activity struck a deep chord, but it was deeply impersonal. Facebook of course was much more sophisticated, but that underlying chord remains. There are no consequences to actions or thoughts online. The consequences actually play out in the real world. Application to elearning? How about this: Because you can separate your students from the physical reality of the actual world, you can put their heads in places that they just can't go safely otherwise. Think role-playing, simulations, even the tried-and-true threaded discussion... the topics become objectified online. They lose even the subjective connections of tone of voice, the nuances of hesitation or bold assertion, and so to a large extent they lose their social consequences. This is a great thing when exploring theories. A great thing when conducting virtual experiments. It feels real, it feels personal--but it's really not, and in a learning environment what's done can be undone; what's said can be retracted, if you allow it. Use this characteristic responsibly and it can be a very powerful tool.

4. Deeply impersonal connections pave the way for enormous viral movements. Twitter is where you share something with everyone before you share it with anyone. Let's face it, what people generally post on Twitter or Facebook is the stuff they feel comfortable talking about with everyone in general before they share it with anyone in particular. By definition, this isn't going to be highly personal. But, it is going to be highly communicable. From the original mini-boom of Facemash to the huge growth of TheFacebook to the global phenomenon of Facebook, viral growth happens with things that are not deeply personal, but deeply impersonal. "The question is, who are they going to send it to?" Kill-switch controversies aside, the reality seems to be that no one can stop information from spreading anymore--people are going to know what other people have to say. So if you want to take advantage of social networks for education, for learning, keep that in mind. It's not the specifically detailed objective of your training session or coursework that the social network is going to propagate. It's the overall importance--or impotence--of it. Social media is about shaping thought, promoting belief, and getting down to what's really true about the content. Or at least, what is true as that is perceived by your learning population at large. You can't control it, but you can get it out where you can see it, where you can react to it, and where you can use it to move and improve your audience.

5. The Internet is not written in ink, it's written in the fabric of the universe. That may sound like hyperbole, but the Internet is actually all about electrons, which are arguably the fabric of the universe--or close. Erica Albright's point, though, when she says it's written in ink not pencil, is that it doesn't go away. And that's true. This is a two-edged sword, with the leading edge all about the permanent value it can create. Words spoken in a classroom are gone, but you can edit, update, and display forever your wise words in an online classroom. The trailing edge (the one that's closer to you and likely to cut you) is all about the permanent damage it can do, if your social interactions are not well-considered, or not current, or not timely. This is a good thing, all in all--we are responsible for what we do and say regardless, and so the discipline of saying it well and clearly, in a way that you can stand by it forever, is a help to any learning environment.

Enough. Next year, I'll review the true story of Michael Chasen in the Oscar-nominated, runaway hit, "The Learning Platform."

Or, maybe not.