Shelly Palmer

Social TV Ratings – Why Advertisers Should Be Careful Of What They Wish For

Allow me to start with a personal declaration: As a media researcher who has tracked the word of mouth generated by many advertising campaigns, I am excited by the impending launch of social TV ratings. Slated to come on stream in Q4 this year, social TV ratings should catapult the concept of TV audience ratings from a simple viewer measurement to a consumer response metric. Social TV ratings should provide a standardized quantification of consumers’ social media involvement with a TV program.

Nonetheless, before we all become too enthralled by the prospect of social TV ratings, let me also add, “Not so fast!” Why do I say this?

A New Way of Measuring

Nielsen and Twitter announced this important initiative in December 2012. Billed as the Nielsen Twitter TV Rating (NTTR), it is a revolutionary new TV ratings statistic, primarily based on the tweets each TV show receives.

The NTTR brings the promise of a qualitative, behavioral measure to overlay on standard TV ratings. Currently, in order to determine their total TV presence, some media buyers simply add up their TV ratings across the shows in which their advertising appeared. In effect, all they are doing is weighing their total audience delivery. Indeed, a campaign’s total TV rating achievement is often referred to as its TV weight.

I think the issue may be far more complex and nuanced than just comparing a campaign’s total TV ratings weight with its social TV ratings. On the surface, there would seem to be a simple three-step process: add up the campaign’s TV ratings to get its weight, add up the social TV ratings of the same shows, and compare the two totals to gauge consumer response.
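To make that assumption concrete, here is a minimal sketch in Python, with entirely hypothetical shows and numbers, of the simple process just described: sum the audience ratings into a TV weight, sum the social TV ratings the same way, and set the two totals side by side.

```python
# Minimal sketch of the "simple" process, using invented shows and numbers.
# Each tuple: (show, TV rating in points, social TV rating in points).
campaign = [
    ("Show A", 3.2, 1.1),
    ("Show B", 5.6, 0.4),
    ("Show C", 1.8, 2.7),
]

total_tv_weight = sum(tv for _, tv, _ in campaign)        # the campaign's TV "weight"
total_social = sum(social for _, _, social in campaign)   # summed social TV ratings

print(f"Total TV weight:                {total_tv_weight:.1f}")
print(f"Total social TV rating:         {total_social:.1f}")
print(f"Social points per rating point: {total_social / total_tv_weight:.2f}")
```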

It’s Not That Simple, Though

For any advertiser or agency hoping that adding up social TV ratings may be akin to summing up audience TV ratings, I offer the following two observations to consider:

First, take the Super Bowl, the granddaddy of all TV shows generating word of mouth. Counterintuitively, not all advertisers who appear in the game see an actual uplift in their brand’s word of mouth. Indeed, according to Keller Fay, the leading word of mouth researcher, about 10%-15% of Super Bowl advertisers can see a decline in their word of mouth in the week after the game. Such an outcome would not have been anticipated by the simple process above.

Second, in late 2010, the Word of Mouth Marketing Association honored me with their Gold Award for Research for constructing a multiple regression analysis that demonstrated the connection between Sony Electronics’ advertising and their subsequent word of mouth. The connection was not a straightforward one. One of my key findings was that ad-generated word of mouth depended not only on Sony’s media weight, but also on their share of voice. In other words, ad-generated word of mouth was seen to be competitive in nature.
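For readers curious about the mechanics, here is a hedged sketch of that kind of analysis: an ordinary least squares regression of weekly word of mouth on media weight and share of voice. The weekly figures below are invented, and the specification is only a stand-in for the actual Sony model; the point is simply that both an absolute term (weight) and a competitive term (share of voice) enter the equation.

```python
import numpy as np

# Hypothetical weekly data, invented purely for illustration.
media_weight   = np.array([120, 150, 90, 200, 170, 110, 60, 140], dtype=float)    # weekly GRPs
share_of_voice = np.array([0.30, 0.35, 0.22, 0.45, 0.40, 0.28, 0.15, 0.33])       # share of category GRPs
word_of_mouth  = np.array([410, 480, 330, 620, 560, 390, 240, 460], dtype=float)  # weekly brand mentions

# Design matrix: intercept, media weight, share of voice.
X = np.column_stack([np.ones_like(media_weight), media_weight, share_of_voice])

# Ordinary least squares fit.
coefs, _, _, _ = np.linalg.lstsq(X, word_of_mouth, rcond=None)
intercept, b_weight, b_sov = coefs

print(f"Intercept:             {intercept:7.1f}")
print(f"Effect per GRP:        {b_weight:7.2f}")
print(f"Share-of-voice effect: {b_sov:7.1f}")
```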

Estimating a campaign’s word of mouth is not like calculating ad awareness, which is largely a function of total ad weight and weekly reach. A more complex relationship may be at work, one that can make word of mouth modeling more like sales modeling.
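To illustrate the contrast, here is a toy comparison whose functional forms and parameters are purely my own assumptions, not anyone’s published model: awareness is sketched as a saturating function of accumulated weight, while the word of mouth response also has to react to what competitors are spending.

```python
import math

def awareness(total_grps: float, k: float = 0.004) -> float:
    """Toy saturating response: awareness rises with total ad weight,
    then flattens out. The curve shape and k are assumptions."""
    return 1.0 - math.exp(-k * total_grps)

def wom_response(own_grps: float, competitor_grps: float,
                 base: float = 100.0, lift: float = 2.5) -> float:
    """Toy competitive response: word of mouth scales with share of
    voice, so heavy competitor spending can mute the same ad weight."""
    share_of_voice = own_grps / (own_grps + competitor_grps)
    return base * (1.0 + lift * share_of_voice)

print(f"Awareness at 300 GRPs:            {awareness(300):.0%}")
print(f"Word of mouth, quiet competitors: {wom_response(300, 100):.0f}")
print(f"Word of mouth, loud competitors:  {wom_response(300, 900):.0f}")
```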

On the upside, UM has undertaken a number of special analyses that frequently demonstrate a strong relationship between a sponsor’s TV show and the sponsor’s consequent word of mouth. In those cases, the sponsor’s recipe for success is clear: the brand’s personality has to fit the tone of the program into which it is integrated.

For example, a brand perceived as older would almost always need to exude an evident sense of humor, or even young-at-heart irreverence, if it were being integrated into The Colbert Report.

The upcoming release of social TV ratings justifiably enthuses many of us in the ad business. Yet for social TV ratings truly to succeed, their advertising impact will need to be verified and validated. To their credit, both Twitter and Nielsen have an impressive array of ad effectiveness experts on their respective benches. Make no mistake: if Twitter and Nielsen can get beyond the issues I’ve outlined here and categorically prove the effectiveness of social TV ratings, it will upend the TV airtime market as we know it.