So . . . one of the topics that I am most interested in is outcomes, and I wrote a post about it a couple of months ago. If you can't be bothered to follow the link, the essence of the post is that we confuse best decisions (choosing the best option prior to learning the outcome) and right decisions (the best decision in light of the eventual outcome). The example I use is the decision of whether to trade three lottery tickets for one; the best decision is to decline the trade, but if the single ticket is the eventual winner, the right decision would have been to do the trade.
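(For anyone who wants the arithmetic spelled out, here is a minimal Python sketch; the draw size is a made-up number, purely for illustration.)

```python
# Toy illustration of the lottery-ticket trade from the post.
# N is a hypothetical draw size; with N equally likely tickets,
# each ticket wins with probability 1/N.

N = 1_000_000  # assumed number of tickets in the draw (made up)

p_keep_three = 3 / N    # decline the trade: hold three tickets
p_trade_for_one = 1 / N  # make the trade: hold one ticket

print(f"Keep three tickets: P(win) = {p_keep_three:.6%}")
print(f"Trade for one:      P(win) = {p_trade_for_one:.6%}")

# The best decision is whichever has the higher probability (keeping
# the three), no matter which ticket hindsight later crowns the winner.
```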
One comment I got about this post was that it sounded like "hindsight is 20/20." I would agree insofar as the right decision goes - we determine whether we made the right decision through hindsight. The best decision remains unchanged - it is based on probabilities, not outcomes. And the fact remains that unless we are Marty McFly or Doc Brown, we never have access to the eventual outcomes. So ultimately what we really want are best decisions, even if that means accepting that some of them won't be right decisions.
This was illustrated when I appeared on Jeopardy! a few years back. Prior to taping the show, the contestant wranglers give an hour-long spiel about all aspects of the show - rules, gameplay, and some rudimentary strategy. One suggestion that was given was to avoid jumping around the gameboard looking for Daily Doubles, and instead pick the clues from the lowest dollar amount to the highest. Now, if you could somehow suss out where they were, your probability of winning would be much higher. But looking for one or two clues out of 24 is a fool's errand, because the odds of actually finding them are low. Should you jump around the board and find one, you would have made the right decision, but the strategy itself would never be the best decision (interestingly, the contestant wranglers said that only one contestant in the show's history had a talent for knowing where they were, and he was very successful; a more likely explanation is that he was lucky and that luck led to his success).
But the hindsight-is-20/20 argument also assumes a different point: that we would in fact change our decision given the opportunity. There's an old joke: two guys are watching a boxing match at a bar, and they make a bet on the outcome. One of the boxers makes a big mistake and gets knocked out. The winner collects his money, but then has a pang of guilt and admits that the match was actually a re-broadcast - he had seen it already, knew the outcome, and bet on the boxer he already knew had won. The loser then admits to having seen the match before as well. When asked why he bet on the loser, he says that he didn't think he would make the same mistake twice.
If we had access to time travel (oh, if only we did . . .), would we change our behaviour? One of Kurt Vonnegut's novels, Timequake, centers on an event in which everyone re-lives the previous ten years of their life. The catch is that they are passive observers reliving everything, unable to change anything. They have to watch themselves make the same mistakes all over again. Painful, huh? But if given free will, would we change anything, or hope that this time it works out?
And if you really believe in randomness, as I do, then you should continue to make the best decisions (rather than what you 'know' to be the right ones) in such a circumstance, because the outcome is still the result of a random mechanism. If you change your action, the result may very well change. Look at Bill Murray in Groundhog Day - he changes a few things, and the fates of everyone in the town change too. Marty McFly meeting his parents negates his existence (but, luckily, he doesn't disappear immediately, and has time to entertain us with his attempts to restore his being). And swapping three lottery tickets for one may very well affect the outcome.
Okay, I'm going to stop now, because my brain is tired. Hopefully this kind of makes sense.
What we don't know is usually far more important than what we do. Ignoring those unknowns, as we tend to do, can lead to bad choices and behaviour. Too often we focus on the library of books we have read, and not the ones we haven't. Look at all the books here. I haven't read any of them (although it's kind of hard to tell because they don't have titles) - have you?
Friday, September 24, 2010
Thursday, September 23, 2010
They're Acting Like a Couple of Boobs
So . . . the breastfeeding propagandists are at it again (by the way, I love coming up with unexpected opening sentences. I defy anyone to have predicted that I would start a post this way). For the uninitiated, when you have a child you quickly learn that there is a large, well-organized, vociferous group of advocates for breastfeeding (of children). These medical professionals, mothers, and other interested parties are very eager to inform you of the vast superiority of breastfeeding over formula feeding (commonly cited benefits of breastfeeding: smarter children, happier children, healthier mothers, better bonding between mother and child, children who can fly, fewer incidences of gout. Okay, I made the last two up).
I am not entering the boob juice debate here (I was formula fed, and I turned out just . . . . well, I'll let you draw your own conclusions), just commenting on the latest lunacy by a group of people with too much time on their hands. In case you missed the earth-shattering news, Old Navy had to apologize to a group of breastmilk zealots yesterday because they sold a onesie with an apparently offensive message printed on it. The lactators and their supporters (the human kind, not the lycra-spandex kind) have organized a boycott of Old Navy as a result of their outrage.
Let's get back to reality, people. First of all, are there no worthier causes for which these people could be expending their effort? I'm not saying that breastfeeding isn't important, but it certainly isn't essential (in the sense that there is a safe substitute - of course, those with extreme enough views will dispute that as well). It's a shirt. That no one is forcing you to buy.
Apparently those blogging in favour of the boycott (you know those blogging types) claim that the onesie is a tool of the formula industry. That's it - you've cracked the code and found out all of Old Navy's secrets. It's all a plot to advance the interests of the formula makers. They are going to take over the world one onesie at a time. I mean, do people actually believe the crap they write? (don't worry, I do)
The extremism involved in breastfeeding advocacy is also troubling. It is often promoted as the only way to feed your baby. The reality is that there are, in fact, formula-fed babies, and what's wrong with a onesie for them? There are plenty of mothers out there who cannot or will not breastfeed, and they are made to feel inferior. I'd like to advance a few reasons as to why the debate is so heated:
1. Mothers who have decided to breastfeed are trying to combat cognitive dissonance (holding two conflicting ideas simultaneously). If formula is acceptable, breastfeeding is inferior, because it is more time-consuming, tethers the mother to the baby almost all the time, prevents a return to work, etc. Therefore, if I choose to breastfeed, it must be superior (and studies have shown it to be better, on average, than formula). The more I promote breastfeeding, the more I justify my own decision. This is the same type of behaviour as when new converts to a religion vigorously promote that religion, or people starting a diet talk about how everyone else should get healthier too.
2. The health-care field supports it because it is in their interest to do so. Lactation consultants, breastfeeding doctors (those doctors who study and help with breastfeeding, not doctors who breastfeed), maternity nurses, and so on all derive attention and respect from new mothers' need to breastfeed. They are placed in a position of authority and given another field in which they can act as experts. I'm not saying that doctors seek out areas in which to be experts, but rather that people in general do that. Nor would I suggest that doctors would promote something unsafe for the enhancement of their ego. But giving up authority is a hard thing to do, and as I wrote about a few weeks ago, the medical establishment can be slow to change.
There will be extremists in any group, but as the saying goes, sometimes one side of a debate is right, sometimes the other side is right, but the extremists are never right. And if you are so pro-mommy-milk that you can't accept a cute shirt for kids (some of whom might be, gasp! formula-fed), then you've got bigger problems than this.
And hey, maybe some of those people boycotting Old Navy will instead shop at The Gap or Banana Republic.
Tuesday, September 21, 2010
If You Know the Cost of Everything, How Can You Know the Value of Nothing?
So . . . over the past few weeks various sports teams have announced that they will use demand-based pricing. This term simply refers to businesses allowing themselves the flexibility to change their pricing based on a number of factors. An example would be charging different prices based on who the visiting opponent is; it's likely that Raptors fans will pay more to see the Lakers or the Heat than they will the Hornets or the Timberwolves.
A more complex example would be the sort of pricing that airlines do. There are so many factors that affect price (time of day of flight, connections, time of year, day of week, other flights, to name just a few) that the average person would not be able to predict which flights would be more expensive, outside of a few rules of thumb. At the far extreme of variable pricing is stock market pricing or pricing for gasoline - the price floats relatively freely based on real or expected demand.
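To make the mechanism concrete, here is a rough Python sketch of what a demand-based price might look like; the factors and weights are my own invention for illustration, not any team's or airline's actual formula.

```python
# Toy demand-based pricing rule. Every factor and weight here is invented
# for illustration; real pricing systems use far more inputs.

def demand_based_price(base_price, opponent_draw, seats_left_pct, days_to_event):
    """base_price: face value; opponent_draw: 1.0 = average visitor, higher
    for marquee teams; seats_left_pct: fraction of seats unsold (0-1);
    days_to_event: days until the game."""
    price = base_price * opponent_draw
    price *= 1.0 + 0.5 * (1.0 - seats_left_pct)  # scarcity premium
    if days_to_event <= 7:
        price *= 1.15  # last-minute demand bump
    return round(price, 2)

# Marquee visitor, half the seats gone, three days out:
print(demand_based_price(60.0, opponent_draw=1.6, seats_left_pct=0.5, days_to_event=3))
```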
A lot of the research that I did early in my academic journey had to do with variable pricing, specifically research into how people would react to such pricing schemes. From a company's perspective, variable pricing is great. If you have a stock of something (e.g. concert tickets), being able to change your prices based on demand and supply works well, as you can get the maximum that the market will bear at any one time. But consumers may view things differently. We have become used to the notion that we may have paid more for our airplane seat than the person sitting next to us, but we aren't ready to transfer that to a lot of other contexts.
But the thing of it is that consumer decision-making tends to operate under the assumption that people are free to buy something or not; in other words, no one is holding a gun to our heads forcing us to buy. Therefore it is necessarily true that if we buy something, we ascribe to it at least as much value as its cost. If not, we wouldn't buy it. That value may not be tangible or measurable (outside of price), but it is no less real.
All of this makes it difficult to complain that we have been gouged by sellers. If you want to see a Leafs game, you still will have (somewhat) affordable options, but if you specifically want to see the Leafs play the Canadiens or Canucks, you'll probably have to pay more. And you'll only do that if seeing the game is worth the exorbitant price at the time you make the decision.
I'd go even further. Gas companies have us over a barrel - they could jack the price of gas up to $5 a litre tomorrow and would still have customers, at least in the short term. And we would willingly pay $250 to fill up our tank if avoiding the alternative (e.g. not getting to work, loss of mobility) was worth more to us than the money. Taking advantage? Sure. Gouging? Not so sure. Unfair? Definitely not - we're willing parties to the transaction.
When I was in my MBA program I did some tutoring on the side, and chose an unusual pricing scheme - I let my tutees (?) decide for themselves how much they wanted to pay once the tutoring was complete. I could have put a price on it, and might have done ok, but this way I did pretty well (averaging about what I would have charged anyway), and no one complained about the price, because they themselves chose it. Likewise, had I quoted a price and they paid it, they would have had as much basis to complain (i.e. none), but in that case I might have heard some grumbling.
Then again, forget what I said. By this definition, this blog has no value, because it's free.
Thursday, September 16, 2010
The Small Screen is a Bigger Canvas
So . . . I read an interesting column in yesterday's Globe, by TV writer John Doyle. He takes the position that television has overtaken movies in providing high-quality narratives, and I gotta say, I tend to agree with him.
Now don't get me wrong, I love movies and always have. I see a lot fewer of them now than I used to (combination of parenthood and no longer working for a film exhibitor), but give me the chance to go and I will. But I also watch a lot less TV than I used to (parenthood again), and I think that there is better stuff to be found on the small screen than the large.
Now, both media have taken great strides over the past few decades. For everyone who bemoans the lack of quality movies these days, I say that I think the average movie is better now than ever. The best movies of the past few years (in my opinion, these include The Prestige, Almost Famous, Inglourious Basterds, The 40-Year-Old Virgin, just to name a few, and I'm sure I'm leaving out some that I like even better than these) rank among the best of all time. There are no movies that surpass Singin' in the Rain or The Godfather, but there weren't any in many other decades, either.
Essentially what I'm saying is that the best movies of the past 10 years stand up to the best movies of any 10-year period, but at the same time the average movie has improved. There are still clunkers (Transformers), but they're no worse than the worst movies of times gone by (Computer Beach Party, Manos: The Hands of Fate).
Television, however, has made huge strides forward. There are shows that I used to love (Knight Rider, Frasier, L.A. Law), that were considered quality shows once upon a time (well, maybe not Knight Rider), that are now unwatchable. Thanks to the myriad "oldies" TV stations, we can see just how excruciating some of these shows were. The Cosby Show was an enormous hit; I was never a huge fan, but trying to watch it now is painful. Even Seinfeld seems stale.
And while current TV is not always stellar, it is of generally high quality. The sitcom is going through some extremely long death throes (the most popular and most critically acclaimed series, Two and a Half Men and Modern Family, are both horrible as far as I'm concerned); even so, we have The Office, and Curb Your Enthusiasm, and Glee (inconsistent, but still innovative, even if they did steal the concept from Cop Rock).
The TV drama seems to be where it's at. The Wire is the best thing I have ever played on my DVD player. Ever. Mad Men is outstanding. Lost, despite its flaws, was unlike anything ever seen on TV before. Even formulaic network shows (The Good Wife, Law & Order: SVU) are really good formulaic network shows. I'm excited to see well-received shows I haven't watched yet (e.g. Dexter, Deadwood, Weeds) the way I used to be excited for movies that were coming soon.
Maybe it's just that I'm watching more TV than I'm seeing movies. Maybe it's because I'm no longer living in the downtown of a city, where innovative films are easier to find (though I never liked overly artsy stuff). Maybe I'm just older and would like the commercial Hollywood crap more if I were ten years younger. And maybe it's because the biggest televisions are rivalling the smaller movie theaters in terms of screen size.
But maybe, just maybe, TV is better.
Tuesday, September 14, 2010
Uncertain Odds on my Ambiguous Ignorance
So . . . as I've been beating you over the head with so far in this blog, we are a predicting, gambling, prognosticating species. We are compelled to try to determine what will happen in the future (and we are bad at it). Today I want to write about different types of future randomness, those of uncertainty, ambiguity, and ignorance (or to use dated political analogies, Paul Martin, John Kerry, and George W. Bush).
Let's start with uncertainty. An uncertain situation is one in which we know the potential outcomes and their probabilities, but we don't know what will happen. Think of a card or dice game; using math you can determine the odds of every possible outcome. If you roll two dice, there is a one in 36 chance of getting a total of two. Uncertainty is extremely useful in this regard (but not in much else - more on that later).
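(For the curious, a few lines of Python can enumerate every outcome and confirm the 1-in-36 figure - this is purely illustrative.)

```python
from collections import Counter
from itertools import product

# Enumerate all 36 equally likely outcomes of rolling two dice.
totals = Counter(a + b for a, b in product(range(1, 7), repeat=2))

print(totals[2], "of 36 outcomes total 2")   # 1 of 36
print(totals[7], "of 36 outcomes total 7")   # 6 of 36
```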
Ambiguity is just as it sounds - you are ambiguous about the probabilities. You have some knowledge of the outcomes and probabilities, but not complete knowledge. One example would be a sporting event. Before a hockey game (assuming you know a little about hockey) you have some idea about who is more likely to win, but you could not put specific odds on it, no matter how sophisticated your math. Bookies try, but their odds are based on past performance, thorough knowledge, and intuition. It would be impossible to say with certainty that one team has a 57% chance of winning, partly because it would be impossible to assess whether that figure was correct unless the same game were played the same way multiple times.
Ignorance describes a situation in which you don't know the outcomes or the probabilities. Imagine trying to guess the outcome of a hockey game if you had never heard of hockey before. You might expect that there would be a winner, but not even that is certain (ties, overtime losses, etc).
Our natural tendency as humans is to select situations where there is uncertainty as opposed to ignorance. We don't like to feel dumb. Ideally, we'd like certainty, but no one has certainty in their life (and trying to find it leads to other social problems, like being certain your religion is correct or certain that Mike Myers would have a long and illustrious film career).
The problem with all of this is that we treat most situations as though they contained only uncertainty, whereas in most real-world situations ignorance rules (and I'm just talking about future randomness here, not even what is, or isn't, in people's heads, though that statement could still be correct). According to Nassim Taleb (author of The Black Swan - what, you haven't read it yet? Go now!), the financial meltdown happened because a lot of people mistook ignorance for uncertainty. Part of ignorance is blindness to possibilities; many of the financial models in use ignored low-probability, high-impact situations.
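A quick simulation makes the point. Every number below is invented for illustration - this is a cartoon of a market, not anything resembling an actual financial model - but it shows how a model that leaves out a rare, large shock understates the real downside.

```python
import random

random.seed(1)

def daily_return(include_rare_shock):
    r = random.gauss(0.0005, 0.01)           # ordinary daily noise (made-up parameters)
    if include_rare_shock and random.random() < 0.001:
        r -= 0.40                             # rare, large shock the tidy model ignores
    return r

def worst_year(include_rare_shock, years=1000, days=250):
    # Simulate many years and report the worst annual return seen.
    worst = 0.0
    for _ in range(years):
        wealth = 1.0
        for _ in range(days):
            wealth *= 1.0 + daily_return(include_rare_shock)
        worst = min(worst, wealth - 1.0)
    return worst

print("Worst year, shocks ignored:  {:+.1%}".format(worst_year(False)))
print("Worst year, shocks included: {:+.1%}".format(worst_year(True)))
```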
In other words, we treat situations in our day-to-day lives as though they were a card game. If I go to a job interview, I look at it as though I either get the job or I don't. But I may perform so poorly that my reputation is sullied and word gets around the industry that I am not to be touched. I may perform so well that I get a better job than the one I applied for. Both are of very low probability, but possible. It is impossible to think of all the possible outcomes (go on, try - I bet you didn't think of the one in which the interviewer is a carnivorous alien who eats you - ok, maybe that one isn't possible). Yet we behave as though we know them.
Next time you think you have a handle on future outcomes, think of Lindsay Lohan. No one knows what she's going to do next. Anything is within the realm of possibility. Now apply that to your own life. Scared yet?
Friday, September 10, 2010
Meet Your Professor, Dr. Chuckles
So . . . here's my last installment of my back-to-school week blogs. Today's posting has to do with the notion of professor as entertainer (hence "chuckles"; I could have used Dr. Giggles, but that would have conjured notions of a bad horror movie starring the mentally-challenged guy from L.A. Law as a psycho doctor). Steve Martin claimed in his book Born Standing Up that if he hadn't made it as a stand-up comic, he would have been a college professor, because then he would still be performing before a room every day. I don't necessarily think this sentiment is rare.
According to a recent survey of over 10,000 Ontario university students, what students want is a motivating, enthusiastic, entertaining professor. The OUSA survey reports that these qualities are desired by 74.6% of students, which is only eclipsed by "delivers interesting, well-prepared and organized lectures" at 83.7%. The next-most important factor, the ability to communicate in multiple ways, is quite a bit less important (52.4% of respondents say it is necessary). From there, the more nuts-and-bolts elements of the course are listed, and none is chosen by more than 25% of students (things like outlining expectations, availability of the professor to meet, etc.).
So what common thread holds the three most important factors together? Communication. Style. Delivery. Not content. Which is a major disconnect with what professors consider important. Many profs believe that content is king, and that a) the students are there to learn the content, and so the onus is on them to pay attention and b) they have a responsibility to deliver the content, and there is no time for anything but.
The first point, that students should just pay attention, is fine in theory but wholly unrealistic. Could you sit and listen to a content-heavy lecture for 90 minutes, even on a topic you have an interest in? What many profs forget is that the topic is not nearly as interesting to the students as it is to them. Most students will begin by paying attention, and at least try, but after 10-15 minutes of droning it's hard to focus. Being enthusiastic and engaging helps combat this.
As far as content delivery goes, we profs do have a responsibility. But too often, "delivery" is considered to be the transmission of information, without consideration for receipt. Delivery is only complete when the information is received, not when it is sent. If FedEx regarded delivery in this way, they would be out of business.
Which raises the question of how to be engaging. Oh, how many of my profs from undergrad could have benefited from a basic teaching or communications course. First and foremost, consider the perspective of the students and what will interest them (in terms of style - I'm not advocating only teaching the most interesting content). Use humour, or examples, or a problem-solution approach. Make the information relevant. Give over some class time to practical problem-solving.
Humour works well, but I know that not everyone is as naturally funny as I. It is also potentially dangerous, because you don't want to be seen as a clown (as in this clip - fast forward to 3:19). The key factor is to be enthusiastic, i.e. behave as though you actually want to be there. Instead of counting on the information being interesting (because it is to you), consider what will make it interesting for the students. The book Made To Stick is a great read and has excellent points to make on this topic.
Anyway, now we're back to school, and next week I'll get back to my usual topics. And only 11 weeks till the end of semester!
Thursday, September 9, 2010
Getting Into a Group Scene
So . . . continuing with my "back to school" theme (the time of year, not the movie, though that Triple Lindy was sweet), today I'm talking about group work. I include group work in some of my courses, and typically students dislike it. It also presents many headaches for me too. So why do it?
Besides the fact that I'm a sadistic jerk, I include it because most of the work that anyone does in any job includes interactions with others. Outside of lighthouse keeper or Unabomber, there aren't many jobs where you never have to work in a group situation. Sure, relying on others is a pain in the ass, but you can't do everything yourself (even I know that, and apparently I have a superman complex - though I don't get the same reaction when I wear colorful spandex). Students should learn early that they will be disappointed by their colleagues, that other people will do as little work as possible, and that coming together to reach a common goal is a painful process.
This year I'm instituting some changes in how I manage my course group work. First of all, in at least one of my courses I'm going to assign the groups myself, rather than letting the students pick them. There are advantages to letting the students pick their own groups, but two big problems. First, that's not how it works in the real world. Second, there are invariably one or two groups that are composed of those who could not form groups on their own, and they could be at a disadvantage because there is then the need for members to get to know one another, an issue that would not be present in other groups.
The second change I'm instituting is that each group must meet with me at least once during the term (prior to the assignment due date). This will not only allow the students to ask questions and get guidance from me (I may not know everything, but I will be the person grading the assignments, after all), but also allow me to observe group dynamics. Too often I have only heard about intra-group problems on the day the assignment is turned in, and that is too late for me to do anything about it (even Superman couldn't go back in time; I don't count the spinning-round-the-world thing in the first movie, it was a cop-out). Not that I especially want to play cop to a misbehaving group, but I also don't want five people's grades to suffer because of one disruptive member.
Which brings me to the last point about group work - assigning grades. The first time I taught a course that included a group component, I asked for peer evaluation of group members. Never again. It led to many issues, vendettas, and at least one threatening e-mail. It seems that every group agreed amongst themselves to evaluate everyone the same, and then everyone did just the opposite. I had students complaining that other group members got higher grades than they did, even though they had all supposedly evaluated each group member the same (which they hadn't). I had one group in which four members downplayed the contributions of the hardest-working member, because they were all friends with each other and she wasn't. Too much pettiness.
My typical way of dealing with this is to make clear at the outset of the course that all group members will receive the same grade, regardless of anything that goes on within the group. Therefore, students should choose group members carefully. However, this method is far from perfect (especially if you are imposing groups on students). It assumes all students have the same goal (to do well) and are fair-minded (hah! why should students be any different than the rest of us?). But it is the most egalitarian. Furthermore, it should provide an incentive for those who want higher marks to manage the groups and light a fire under the butts of the lazier members. Or at least do more work and cover for their coasting.
And if that isn't like working in a group in the real world, I don't know what is.
Tuesday, September 7, 2010
Let's Have a Discussion about Lectures
So . . . this is the first week back for school and university, and I thought I would dedicate this week's posts to education, particularly the higher kind. As someone with a vested interest in the topic I am keen to learn the opinions of my reader(s). Today I am going to discuss something that I feel very strongly about, which is how information and knowledge are passed to students.
In yesterday's Globe and Mail, Julia Christensen Hughes, the dean of the faculty of business at the University of Guelph, basically argued that lectures do not work and that teaching at universities must be more interactive (in the classical sense, not the electronic sense). Simply reading/reciting/giving lectures to (or at) students is ineffective, she says, and you must pause and ask questions of the students (gasp!) or solicit questions from them (double gasp!). Predictably, several university professors wrote letters to the editor disagreeing with her.
I, however, happen to agree with her points. From the time I was a university student myself I hated lectures, and I think I hate giving them more than I hated hearing them. We no longer live in an age where all of the knowledge on a particular topic resides in people's heads - we have books, websites, etc. If you want to know some information, as Hughes says, you google it. I remember sitting in lectures in undergrad and thinking that I would just prefer the professor to print out the lecture and give it to us, so that we could read it on our own time. Really, does hearing a professor talk add value over the written word? It can, and with some profs it does, but in many cases the answer is no.
Where value can be added is in teaching students to think critically, make decisions, and express opinions. In other words, participate in a discussion. This seems to be more the norm in business schools than in other faculties (disclaimer: I am not saying that all business schools do this and all other faculties do not). In my courses, the learning goals are for students to develop their own approach to business situations. I assign textbook readings, but in class the discussion of the text focuses on clarifying what is not well understood, challenging conventional wisdom, and applying the knowledge. To simply repeat the text without adding value (as was the case in many an undergrad lecture I attended) is pointless.
Ultimately, the students in university are going to need to get jobs (well, most of them) and these jobs will entail the performance of tasks and the making of decisions (well, most of them - not academia). Lectures are not the best training method for these end goals. Lecture with discussion is a different story.
Leading the class in discussion requires different skills, knowledge, and experience than do lectures. I am not saying that a professor who can do one or the other is necessarily smarter or more knowledgeable. I for one could not deliver 1-3 hour lectures without wanting to hang myself (I don't like to hear myself talk too much . . . no, really I don't . . . writing this blog, that's another matter) and I know I would not be good at it. And I can understand why university lecturers would push back against this idea, as it would lead to greater uncertainty as well as the need to work in order to acquire new skills.
I certainly wasn't very good at leading class discussions when I started, but I think I have improved with time: practice makes perfect (no, I don't think I'm perfect, it's an expression). Which is exactly the point - I'm advocating giving the students the opportunity to practice expressing their opinions and analytical skills in a safe environment, before they have to go out and do so in the real world.
Wednesday, September 1, 2010
Putting Dr. Zamboni on Ice
So . . . a doctor named Paolo Zamboni has been trying for some time to get people to think differently about multiple sclerosis. MS has long been perceived to be an auto-immune disorder that causes degeneration of the nerve endings, which in turn causes a wide variety of symptoms, and eventually physical and mental disability. Dr. Zamboni wants to begin trials of a new treatment in Canada, and lots of MS patients want to undergo the surgery, but the Canadian government has decided not to fund the trials.
Ok, that was kind of dry and probably not what you have come to expect from my blog entries. But it does relate - because it may come down the same type of errors in thinking that I'm so fond of writing about. I hedge my claim because I am not a medical doctor, and cannot assess the merit of the medical claims of Dr. Zamboni; I can only speak to the decision-making biases that surround his ideas.
Essentially what is going on is that someone is trying to help people with MS, and the way in which he is trying to do it differs from the accepted conceptualization of the disease. Dr. Zamboni claims that MS is caused by iron build-up in the brain due to vein blockages, and is therefore treatable with angioplasty (a procedure in which blocked veins are opened up). The data regarding this new procedure are mixed, but apparently there is evidence that it can work.
This all reminds me of the story of Helicobacter pylori. Once upon a time, doctors all knew that peptic ulcers were caused by stress, diet, or blood type. Two Australian doctors, Barry Marshall and Robin Warren, came up with a crazy idea - ulcers were caused by bacteria. All the other doctors laughed at them; silly people, thinking bacteria could cause ulcers. To prove them wrong, Barry Marshall drank a beaker full of H. pylori bacteria (sort of like Sam Beckett proving that his theories of time travel were correct on Quantum Leap). Lo and behold, Marshall developed ulcers, which he then treated with antibiotics. Despite this (and other) proof, the medical establishment took a long time to accept this new idea about ulcers. And then they did, and everyone lived happily ever after, especially Drs. Marshall and Warren, who won the Nobel Prize.
Is Dr. Zamboni the next Marshall and Warren? I don't know, and the point is that no one does (he does share one similarity - while he didn't do the procedure on himself, he did do it on his wife, to apparent success). He has a method that has worked. There is risk involved (as with any procedure), but as long as the people willing to undergo the trials are aware of the risks, they should be allowed to volunteer themselves. The sense around the government's decision is that it was driven less by budget and more by dogma - MS is an autoimmune disorder, so why look for other causes? Prominent doctors are applauding the no-fund decision.
(I also think that if Dr. Zamboni had a different name, people might be more accepting of his ideas, especially in our hockey-mad country. Unfortunate but probably true)
The point is that it can be difficult to accept new ways of looking at old problems. Evolutionarily, this makes sense; the existing system works ok, and new ideas can be dangerous, so let's be biased against new ideas. The problem with this is that a few new ideas are improvements, and impediments to their acceptance are harmful in the long term. The costs here are not huge, and no one is being coerced into anything. Just pony up the dough and let's see if we can help people.