Wednesday, January 30, 2013

When the Ratings Are/Aren't What You Expected: A Process for Uncovering What's Important

Ratings data is noisy.

Along the pathway to “is this real?”, meaningful data sits alongside the spurious and even the potentially misleading.

And when the ratings come in better or (fortunately, less often for our stations) worse than expected, there are usually questions.


Usually the data holds some (but likely not all) of the answers.

You'll increase your chances of finding the meaningful when you have a plan before going under the hood.

Use your ratings data and your knowledge of what happened at your station and in the market to develop a list of as many scenarios as possible that could be contributing factors. This list will direct you to specific areas of inquiry. It will also give your investigation focus while still allowing you the freedom to go down some rabbit holes without the fear of getting hopelessly lost or sidetracked.

Let’s look at one example – a big TSL (time spent listening) swing – and some possible scenarios:
 • There was a real change in usage because of something that changed on the station
 • There was a real change in usage not because something changed on the station but because something changed in the market or on a competitor
 • There were more or fewer heavy radio users, regardless of format, in the sample, with overall usage that deviated from the norm
 • There were more or fewer heavy users of your format, your station, or a competitor’s station in the sample
 • There was a change in the demographic composition of the sample overall or in the sample of your format’s lifegroup
 • Proportionality was/wasn’t an issue
 • Geography/zip code returns were/weren’t an issue
 • There was a significant change in the percentage of employed fans of your station who are in the sample
 • There was a change in occasions of listening or in vertical or horizontal cuming (revisit bullets 1-5; see the sketch after this list)
 • A single respondent or a handful of respondents skewed a particular cell (a quick check appears a little further below)
 • There was a station identification issue (diaries) or a crediting error
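
Two of those bullets – cuming versus occasions, and heavy users in the sample – can be framed with the standard TSL identity: TSL = (AQH persons × hours in the daypart) ÷ cume persons. Here's a minimal Python sketch of that decomposition; the book-to-book AQH and cume figures are invented for illustration, and nothing here is tied to any ratings vendor's software.

```python
# Minimal sketch: split a TSL swing into its cume (horizontal) and AQH
# (vertical) components using TSL = AQH persons * daypart hours / cume persons.
# All numbers below are hypothetical book estimates, not real station data.

def tsl_hours(aqh_persons: float, cume_persons: float, daypart_hours: float) -> float:
    """Weekly time spent listening in hours: AQH * daypart hours / cume."""
    return aqh_persons * daypart_hours / cume_persons

DAYPART_HOURS = 126.0  # Mon-Sun 6a-12mid: 18 hours/day * 7 days

books = {
    "prior book":   {"aqh": 5200, "cume": 98000},
    "current book": {"aqh": 4100, "cume": 95000},
}

for name, b in books.items():
    print(f"{name}: TSL = {tsl_hours(b['aqh'], b['cume'], DAYPART_HOURS):.2f} hrs/week")

# If cume is roughly flat while AQH falls, the swing is vertical (fewer or
# shorter occasions per listener); if cume moves along with AQH, look at the
# horizontal side first.
aqh_change = books["current book"]["aqh"] / books["prior book"]["aqh"] - 1
cume_change = books["current book"]["cume"] / books["prior book"]["cume"] - 1
print(f"AQH change: {aqh_change:+.1%}, cume change: {cume_change:+.1%}")
```

With these made-up numbers, cume slipped about 3% while AQH fell over 20% – a vertical story, pointing at occasions and duration rather than audience size.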

Some of the above bullets can be assessed with relative ease, but others require more investigation, time, and a strong working knowledge of your ratings analysis software.  Regardless, getting a handle on the ‘degree of truth’ in your theories will go a long way in providing insight (of course you'll want to develop a different line of hypotheses for other issues).
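
For the single-respondent scenario, one quick “degree of truth” test is a leave-one-out pass over respondent-level detail: recompute the cell with each diary removed and see whose absence moves the estimate. A hedged sketch, with invented weekly quarter-hour totals standing in for the diary or meter detail your software would export:

```python
# Hedged sketch: flag respondents whose removal swings a cell's mean weekly
# quarter-hours by more than 25%. The respondent IDs and totals are invented;
# real detail would come from your ratings analysis software's export.
# (Sample weights are ignored here for simplicity.)

weekly_quarter_hours = {
    "R01": 12, "R02": 18, "R03": 9, "R04": 15,
    "R05": 210,  # one extremely heavy listener
    "R06": 11, "R07": 14,
}

full_mean = sum(weekly_quarter_hours.values()) / len(weekly_quarter_hours)

for resp_id in weekly_quarter_hours:
    others = [qh for rid, qh in weekly_quarter_hours.items() if rid != resp_id]
    loo_mean = sum(others) / len(others)  # cell mean with this diary left out
    swing = (full_mean - loo_mean) / loo_mean
    if abs(swing) > 0.25:
        print(f"{resp_id}: this one diary moves the cell mean by {swing:+.0%}")
```

In this toy cell, pulling R05 cuts the mean by a factor of three – exactly the kind of single-diary skew worth knowing about before declaring a trend.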

Wading through ratings data is time-consuming, and some of your efforts won’t lead anywhere (we did say the data is noisy). But if you're going to get as much of a handle on things as you can, you'll need to do a deep dive. Having a plan helps.

And yet, even with as much noise removed as possible, conclusions may still be a bit murky. Sometimes we’ll need to call on the recent past to help us interpret what we’re seeing and give us guidance for the future.

But finding and then spending time with the critical information will increase the probability of getting closer to the truth.

Two quotes from Nate Silver’s “The Signal and the Noise: Why So Many Predictions Fail - but Some Don’t” sum things up pretty well:

“…immersion in a topic will provide disproportionately more insight than an executive summary.”

And,

“…success is determined by some combination of hard work, natural talent, and a person’s opportunities and environment – in other words, some combination of noise and signal.”


PS - “The Signal and the Noise: Why So Many Predictions Fail - but Some Don’t” is an excellent read if you live in a world where data-driven assumptions, reporting, and forecasting are a way of life - or if you play poker, bet on sporting events, or simply watch the local TV weathercasts. Finishing the book at a time when so much ratings analysis is going on here at Albright & O'Malley & Brenner inspired the camera angle for this blog.
