Researchers, Stop Treating What People Say As If It's What People Really Do
I'm tired of research companies presenting the results of what people say as if it factually represents what they did or are likely to do. Come on. This is the era of digital media and marketing and Big Data. We have massive direct measures of most consumer media and consumption behaviors, and we have extraordinary abilities to store, analyze and present that information. Why do we still rely so much on sending out surveys asking people to tell us about behaviors that we can measure directly, and far more accurately?
Let's look at some trade stories from these past few days. Earlier this week, TV Week reported that 87% of DVR owners don't skip movie ads. Did they use readily available anonymous DVR log data on tens of millions of viewers from TiVo or Kantar or Rentrak to find the answer? No. Instead, the Worldwide Motion Picture Group asked 1,500 viewers what they "think" they do. Raise your hand if you think this is an accurate representation of what people who DVR content really do.
Yesterday, research company GfK told us that streaming video may erode traditional TV viewing, since 33% of those surveyed say they watch less "regular" TV. Did they check the above sources, or Nielsen or comScore, all of which have some capacity to answer the question with actual observed data? No. Also yesterday, Forrester told us that younger TV viewers are moving online, citing an 18% hike in online viewing in the U.S. between 2010 and 2012. Did this data come from any one of a number of log-based panels of online video viewing? No. It came from what 60,000 survey respondents said they did, not what they actually did.
Are research companies doing this because they want to deceive us? Of course not. These are reputable firms. Instead, I think they keep to this model because that's what they've done for years and they haven't felt the need to change yet.
But now it's time that survey and other declared data are put into their place as a distant second to observed data in measuring media behaviors. Here's why:
Accuracy. We should use the best available data when we compile and present our research. If we can get factual data on actual behaviors from a massive sample of consumers, why should we rely on remembered data from surveys and polls? Diaries have given way to People Meters. So too should surveys.
Reporting bias. As has been widely reported in many studies, if you ask people about their media behaviors, they tend to significantly underreport their TV viewing and over-report their computer and Internet usage. If you ask them what they watch on TV, it seems everybody spends most of their time watching Discovery, History Channel, C-Span and BBC America. Nobody ever watches reality shows or reruns. Log files of actual viewing from TV set-top boxes tell a completely different story -- as do the ratings.
Panel bias. Some people like to take surveys. Many do not. Just as we've learned that folks who click on ads do not represent most online media consumers -- see comScore's "Natural Born Clickers" work -- the same bias exists for those who are willing to take online surveys. You can try to balance your panel according to sex/age demographics, but you can't entirely take the bias away.
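To see what that demographic balancing amounts to -- and why it can't cure self-selection -- here is a minimal sketch of post-stratification weighting. All the numbers are made up for illustration: a hypothetical panel that over-samples younger respondents is reweighted so each sex/age cell matches an assumed population distribution. The weights fix the demographic skew, but they do nothing about the fact that everyone in the panel, young or old, chose to take the survey.

```python
# Hypothetical illustration of post-stratification weighting.
# Weight for each cell = population share / panel share, so
# over-represented groups count less and under-represented groups more.

population_share = {  # assumed census-style sex/age distribution
    ("F", "18-34"): 0.15, ("F", "35-54"): 0.18, ("F", "55+"): 0.18,
    ("M", "18-34"): 0.15, ("M", "35-54"): 0.17, ("M", "55+"): 0.17,
}

panel_share = {  # assumed panel skew: younger respondents over-sampled
    ("F", "18-34"): 0.25, ("F", "35-54"): 0.15, ("F", "55+"): 0.10,
    ("M", "18-34"): 0.25, ("M", "35-54"): 0.15, ("M", "55+"): 0.10,
}

weights = {cell: population_share[cell] / panel_share[cell]
           for cell in population_share}

for cell, w in sorted(weights.items()):
    print(cell, round(w, 2))  # e.g. young respondents get weight 0.6
```

Note that every weight is conditional on being a survey-taker in the first place; no amount of reweighting recovers the people who never opt in.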
People are what they do, not what they say they do. Let's stop pretending that the latter represents the former (or is in any way factual). What do you think?